Jan 20 11:04:30 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 20 11:04:30 crc restorecon[4689]: Relabeled /var/lib/kubelet/config.json from system_u:object_r:unlabeled_t:s0 to system_u:object_r:container_var_lib_t:s0
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/device-plugins not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/device-plugins/kubelet.sock not reset as customized by admin to system_u:object_r:container_file_t:s0
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/volumes/kubernetes.io~configmap/nginx-conf/..2025_02_23_05_40_35.4114275528/nginx.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/22e96971 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/21c98286 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8/containers/networking-console-plugin/0f1869e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c15,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/46889d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/5b6a5969 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/setup/6c7921f5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4804f443 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/2a46b283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/a6b5573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/4f88ee5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c225,c458
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/5a4eee4b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c963
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/d1b160f5dda77d281dd8e69ec8d817f9/containers/kube-rbac-proxy-crio/cd87c521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c215,c682
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_33_42.2574241751/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/38602af4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/1483b002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/0346718b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/d3ed4ada not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/3bb473a5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/8cd075a9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/00ab4760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/containers/router/54a21c09 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c24
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/70478888 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/43802770 not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/955a0edc not reset as customized by admin to system_u:object_r:container_file_t:s0:c176,c499
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/bca2d009 not reset as customized by admin to system_u:object_r:container_file_t:s0:c140,c1009
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/37a5e44f-9a88-4405-be8a-b645485e7312/containers/network-operator/b295f9bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c589,c726
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..2025_02_23_05_21_22.3617465230/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-binary-copy/cnibincopy.sh not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..2025_02_23_05_21_22.2050650026/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes/kubernetes.io~configmap/cni-sysctl-allowlist/allowlist.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/bc46ea27 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5731fc1b not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/egress-router-binary-copy/5e1b2a3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/943f0936 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/3f764ee4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/cni-plugins/8695e3f9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/aed7aa86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/c64d7448 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/bond-cni-plugin/0ba16bd2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/207a939f not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/54aa8cdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/routeoverride-cni/1f5fa595 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/bf9c8153 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/47fba4ea not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni-bincopy/7ae55ce9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7906a268 not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/ce43fa69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/whereabouts-cni/7fc7ea3a not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/d8c38b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c203,c924
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/9ef015fb not reset as customized by admin to system_u:object_r:container_file_t:s0:c138,c778
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/containers/kube-multus-additional-cni-plugins/b9db6a41 not reset as customized by admin to system_u:object_r:container_file_t:s0:c574,c582
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/b1733d79 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/afccd338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/network-metrics-daemon/9df0a185 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/18938cf8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c476,c820
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/7ab4eb23 not reset as customized by admin to system_u:object_r:container_file_t:s0:c272,c818
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/containers/kube-rbac-proxy/56930be6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c432,c991
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_35.630010865 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..2025_02_23_05_21_35.1088506337/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes/kubernetes.io~configmap/ovnkube-config/ovnkube.conf not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/0d8e3722 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/d22b2e76 not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/kube-rbac-proxy/e036759f not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/2734c483 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/57878fe7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/3f3c2e58 not reset as customized by admin to system_u:object_r:container_file_t:s0:c89,c211
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/375bec3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c382,c850
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/containers/ovnkube-cluster-manager/7bc41e08 not reset as customized by admin to system_u:object_r:container_file_t:s0:c440,c975
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/48c7a72d not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/4b66701f not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/containers/download-server/a5a1c202 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..2025_02_23_05_21_40.3350632666/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-cert-acceptance-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/ovnkube-identity-cm/additional-pod-admission-cond.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..2025_02_23_05_21_40.1388695756 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/volumes/kubernetes.io~configmap/env-overrides/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/26f3df5b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/6d8fb21d not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/webhook/50e94777 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208473b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/ec9e08ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3b787c39 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/208eaed5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/93aa3a2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/ef543e1b-8068-4ea3-b32a-61027b32e95d/containers/approver/3c697968 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c22
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/ba950ec9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/cb5cdb37 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3b6479f0-333b-4a96-9adf-2099afdc2447/containers/network-check-target-container/f2df9827 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..2025_02_23_05_22_30.473230615/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_24_06_22_02.1904938450/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/fedaa673 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/9ca2df95 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/machine-config-operator/b2d7460e not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2207853c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/241c1c29 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/containers/kube-rbac-proxy/2d910eaf not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/..2025_02_23_05_23_49.3726007728/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/..2025_02_23_05_23_49.841175008/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/etcd-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178 not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.843437178/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/c6c0f2e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/399edc97 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8049f7cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/0cec5484 not reset as customized by admin to system_u:object_r:container_file_t:s0:c263,c871
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/312446d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c406,c828
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/containers/etcd-operator/8e56a35d not reset as customized by admin to system_u:object_r:container_file_t:s0:c84,c419
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.133159589/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/2d30ddb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/eca8053d not reset as customized by admin to system_u:object_r:container_file_t:s0:c380,c909
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/c3a25c9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c168,c522
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/containers/kube-controller-manager-operator/b9609c22 not reset as customized by admin to system_u:object_r:container_file_t:s0:c108,c511
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/e8b0eca9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/b36a9c3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/dns-operator/38af7b07 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/ae821620 not reset as customized by admin to system_u:object_r:container_file_t:s0:c106,c418
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/baa23338 not reset as customized by admin to system_u:object_r:container_file_t:s0:c529,c711
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/containers/kube-rbac-proxy/2c534809 not reset as customized by admin to system_u:object_r:container_file_t:s0:c968,c969
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3532625537/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/59b29eae not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/c91a8e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c381
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/4d87494a not reset as customized by admin to system_u:object_r:container_file_t:s0:c442,c857
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/containers/kube-scheduler-operator-container/1e33ca63 not reset as customized by admin to system_u:object_r:container_file_t:s0:c661,c999
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/8dea7be2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d0b04a99 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/kube-rbac-proxy/d84f01e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/4109059b not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/a7258a3e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/containers/package-server-manager/05bdf2b6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/f3261b51 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/315d045e not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/5fdcf278 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/d053f757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/containers/control-plane-machine-set-operator/c2850dc7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..2025_02_23_05_22_30.2390596521/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes/kubernetes.io~configmap/marketplace-trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fcfb0b2b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c7ac9b7d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/fa0c0d52 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/c609b6ba not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/2be6c296 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/89a32653 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/4eb9afeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/containers/marketplace-operator/13af6efa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/b03f9724 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/e3d105cc not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/containers/olm-operator/3aed4d83 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1906041176/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/0765fa6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/2cefc627 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/3dcc6345 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/containers/kube-storage-version-migrator-operator/365af391 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c18
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-SelfManagedHA-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-TechPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-DevPreviewNoUpgrade.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes/kubernetes.io~empty-dir/available-featuregates/featureGate-Hypershift-Default.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b1130c0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/236a5913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-api/b9432e26 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/5ddb0e3f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/986dc4fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/8a23ff9a not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/9728ae68 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/containers/openshift-config-operator/665f31d0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c12
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1255385357/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/..2025_02_23_05_23_57.573792656/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/service-ca-bundle/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_22_30.3254245399/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes/kubernetes.io~configmap/trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/136c9b42 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/98a1575b not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/cac69136 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/5deb77a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/containers/authentication-operator/2ae53400 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3608339744/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes/kubernetes.io~configmap/config/operator-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/e46f2326 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/dc688d3c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/3497c3cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/containers/service-ca-operator/177eb008 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c16
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.3819292994/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:30 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/af5a2afa not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/d780cb1f not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/49b0f374 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/containers/openshift-apiserver-operator/26fbb125 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.3244779536/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/cf14125a not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/b7f86972 not reset as customized by
admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/e51d739c not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/88ba6a69 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/669a9acf not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/5cd51231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/75349ec7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/15c26839 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/45023dcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/ingress-operator/2bb66a50 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/64d03bdd not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/ab8e7ca0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/containers/kube-rbac-proxy/bb9be25f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c11 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_22_30.2034221258/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/9a0b61d3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/d471b9d2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/containers/cluster-image-registry-operator/8cb76b8e not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/11a00840 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/ec355a92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/containers/catalog-operator/992f735e not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..2025_02_23_05_22_30.1782968797/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d59cdbbc not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/72133ff0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/c56c834c not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/d13724c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/containers/openshift-controller-manager-operator/0a498258 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c14 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa471982 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fc900d92 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/containers/machine-config-server/fa7d68da not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/4bacf9b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/424021b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/migrator/fc2e31a3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/f51eefac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/c8997f2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/containers/graceful-termination/7481f599 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..2025_02_23_05_22_49.2255460704/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes/kubernetes.io~configmap/signing-cabundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/fdafea19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/d0e1c571 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/ee398915 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/containers/service-ca-controller/682bb6b8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a3e67855 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/a989f289 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/setup/915431bd not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/7796fdab not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/dcdb5f19 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-ensure-env-vars/a3aaa88c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/5508e3e6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/160585de not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-resources-copy/e99f8da3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/8bc85570 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/a5861c91 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcdctl/84db1135 not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/9e1a6043 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/c1aba1c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd/d55ccd6d not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/971cc9f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/8f2e3dcf not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-metrics/ceb35e9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/1c192745 not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/5209e501 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-readyz/f83de4df not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/e7b978ac not reset as customized by admin to system_u:object_r:container_file_t:s0:c294,c884 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/c64304a1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c1016 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/2139d3e2895fc6797b9c76a1b4c9886d/containers/etcd-rev/5384386b not reset as customized by admin to system_u:object_r:container_file_t:s0:c666,c920 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/cce3e3ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/multus-admission-controller/8fb75465 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/740f573e not reset as customized by admin to system_u:object_r:container_file_t:s0:c435,c756 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/containers/kube-rbac-proxy/32fd1134 not reset as customized by admin to system_u:object_r:container_file_t:s0:c268,c620 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/0a861bd3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/80363026 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/containers/serve-healthcheck-canary/bfa952a8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c19,c24 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..2025_02_23_05_33_31.2122464563/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..2025_02_23_05_33_31.333075221 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/793bf43d not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/7db1bb6e not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/kube-rbac-proxy/4f6a0368 not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/c12c7d86 not reset as customized by admin to system_u:object_r:container_file_t:s0:c381,c387 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/36c4a773 not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/4c1e98ae not reset as customized by admin to system_u:object_r:container_file_t:s0:c142,c438 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/containers/machine-approver-controller/a4c8115c not reset as customized by admin to system_u:object_r:container_file_t:s0:c129,c158 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/setup/7db1802e not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver/a008a7ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-syncer/2c836bac not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-cert-regeneration-controller/0ce62299 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-insecure-readyz/945d2457 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/containers/kube-apiserver-check-endpoints/7d5c1dd8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c97,c980 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/advanced-cluster-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-broker-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq-streams-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amq7-interconnect-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-automation-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ansible-cloud-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry-3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bamoe-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/index.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/businessautomation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cephcsi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cincinnati-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-kube-descheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/compliance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/container-security-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/costmanagement-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cryostat-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datagrid/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devspaces/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devworkspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dpu-network-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eap/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/file-integrity-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-console/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fuse-online/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gatekeeper-operator-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jws-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kernel-module-management-hub/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kiali-ossm/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged not reset as customized by 
admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logic-operator-rhel8/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lvms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mcg-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mta-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mtv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-client-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-csi-addons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-multicluster-orchestrator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odf-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odr-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/bundle-v1.15.0.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/channel.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-cert-manager-operator/package.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-custom-metrics-autoscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-pipelines-operator-rh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-secondary-scheduler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-bridge-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/quay-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/recipe/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-camel-k/catalog.json not 
reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/red-hat-hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redhat-oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rh-service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhacs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhbk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhdh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhods-prometheus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhpam-kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhsso-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rook-ceph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/run-once-duration-override-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sandboxed-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/security-profiles-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/serverless-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-registry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/servicemeshoperator3/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/submariner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tang-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustee-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volsync-product/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/catalog/web-terminal/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized 
by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/bc8d0691 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/6b76097a not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-utilities/34d1af30 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/312ba61c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/645d5dd1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/extract-content/16e825f0 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/4cf51fc9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/2a23d348 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/containers/registry-server/075dbd49 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/..2025_02_24_06_09_13.3521195566/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes/kubernetes.io~configmap/serviceca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/dd585ddd not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c377,c642 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/17ebd0ab not reset as customized by admin to system_u:object_r:container_file_t:s0:c338,c343 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/containers/node-ca/005579f4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c842,c986 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_23_05_23_11.449897510/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_23_05_23_11.1287037894 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..2025_02_23_05_23_11.1301053334/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes/kubernetes.io~configmap/audit-policies/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/bf5f3b9c not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/af276eb7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/fix-audit-permissions/ea28e322 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/692e6683 not reset as customized by admin to system_u:object_r:container_file_t:s0:c49,c263 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/871746a7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c701 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/containers/oauth-apiserver/4eb2e958 not reset as customized by admin to system_u:object_r:container_file_t:s0:c764,c897 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..2025_02_24_06_09_06.2875086261/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/console-config/console-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_09_06.286118152/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..2025_02_24_06_09_06.3865795478/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/oauth-serving-cert/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..2025_02_24_06_09_06.584414814/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/ca9b62da not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/containers/console/0edd6fce not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c25 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.2406383837/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.openshift-global-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/config/openshift-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880 
not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.1071801880/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877 not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..2025_02_24_06_20_07.2494444877/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes/kubernetes.io~configmap/proxy-ca-bundles/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/containers/controller-manager/89b4555f not reset as customized by admin to system_u:object_r:container_file_t:s0:c14,c22 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..2025_02_23_05_23_22.4071100442/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes/kubernetes.io~configmap/config-volume/Corefile not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/655fcd71 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/0d43c002 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/dns/e68efd17 not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/9acf9b65 not reset as customized by admin to system_u:object_r:container_file_t:s0:c457,c841
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/5ae3ff11 not reset as customized by admin to system_u:object_r:container_file_t:s0:c55,c1022
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/containers/kube-rbac-proxy/1e59206a not reset as customized by admin to system_u:object_r:container_file_t:s0:c466,c972
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/27af16d1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c304,c1017
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/7918e729 not reset as customized by admin to system_u:object_r:container_file_t:s0:c853,c893
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/containers/dns-node-resolver/5d976d0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c585,c981
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..2025_02_23_05_38_56.1112187283/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/config/controller-config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_23_05_38_56.2839772658/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes/kubernetes.io~configmap/trusted-ca/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/d7f55cbb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/f0812073 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/1a56cbeb not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/7fdd437e not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/containers/console-operator/cdfb5652 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c25
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..2025_02_24_06_17_29.3844392896/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/etcd-serving-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..2025_02_24_06_17_29.848549803/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..2025_02_24_06_17_29.780046231/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/audit/policy.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..2025_02_24_06_17_29.2926008347/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/image-import-ca/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..2025_02_24_06_17_29.2729721485/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes/kubernetes.io~configmap/trusted-ca-bundle/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/fix-audit-permissions/fb93119e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver/f1e8fc0e not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/containers/openshift-apiserver-check-endpoints/218511f3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c336,c787
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes/kubernetes.io~empty-dir/tmpfs/k8s-webhook-server/serving-certs not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/ca8af7b3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/72cc8a75 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/containers/packageserver/6e8a3760 not reset as customized by admin to system_u:object_r:container_file_t:s0:c12,c18
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..2025_02_23_05_27_30.557428972/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes/kubernetes.io~configmap/service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4c3455c0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/2278acb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/4b453e4f not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/containers/cluster-version-operator/3ec09bda not reset as customized by admin to system_u:object_r:container_file_t:s0:c5,c6
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..2025_02_24_06_25_03.422633132/anchors/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/trusted-ca/anchors not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..2025_02_24_06_25_03.3594477318/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/image-registry.openshift-image-registry.svc.cluster.local..5000 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~configmap/registry-certificates/default-route-openshift-image-registry.apps-crc.testing not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/edk2/cacerts.bin not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/java/cacerts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/openssl/ca-bundle.trust.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/tls-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/email-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/objsign-ca-bundle.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2ae6433e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fde84897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75680d2e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/openshift-service-serving-signer_1740288168.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/facfc4fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f5a969c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CFCA_EV_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9ef4a08a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ingress-operator_1740288202.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2f332aed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/248c8271.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d10a21f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ACCVRAIZ1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a94d09e5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c9a4d3b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40193066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd8c0d63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b936d1c6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CA_Disig_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4fd49c6c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AC_RAIZ_FNMT-RCM_SERVIDORES_SEGUROS.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b81b93f0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f9a69fa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b30d5fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ANF_Secure_Server_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b433981b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93851c9e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9282e51c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7dd1bc4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Actalis_Authentication_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/930ac5d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f47b495.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e113c810.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5931b5bc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Commercial.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2b349938.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e48193cf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/302904dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a716d4ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Networking.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/93bc0acc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/86212b19.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certigna_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b727005e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbc54cab.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f51bb24c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c28a8a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AffirmTrust_Premium_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9c8dfbd4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ccc52f49.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cb1c3204.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ce5e74ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd08c599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6d41d539.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb5fa911.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e35234b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8cb5ee0f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a7c655d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f8fc53da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Amazon_Root_CA_4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/de6d66f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d41b5e2a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/41a3f684.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1df5a75f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_2011.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e36a6752.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b872f2b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9576d26b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/228f89db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_ECC_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fb717492.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d21b73c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b1b94ef.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/595e996b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Atos_TrustedRoot_Root_CA_RSA_TLS_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b46e03d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/128f4b91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_3_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81f2d2b1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Autoridad_de_Certificacion_Firmaprofesional_CIF_A62634068.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3bde41ac.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d16a5865.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_EC-384_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0179095f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ffa7f1eb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9482e63a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4dae3dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/BJCA_Global_Root_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e359ba6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7e067d03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/95aff9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7746a63.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Baltimore_CyberTrust_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/653b494a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3ad48a91.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Buypass_Class_2_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/54657681.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/82223c44.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8de2f56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2d9dafe4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d96b65e2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee64a828.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/40547a79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5a3f0ff8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a780d93.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/34d996fb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/eed8c118.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/89c02a45.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b1159c4c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/COMODO_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d6325660.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d4c339cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8312c4c1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certainly_Root_E1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8508e720.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5fdd185d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48bec511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/69105f4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0b9bc432.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Certum_Trusted_Network_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/32888f65.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b03dec0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/219d9499.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_ECC_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5acf816d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbf06781.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-01.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc99f41e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/CommScope_Public_Trust_RSA_Root-02.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/AAA_Certificate_Services.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/985c1f52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8794b4e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_BR_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e7c037b4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ef954a4e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_EV_Root_CA_1_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2add47b6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/90c5a3c8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0f3e76e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/53a1b57a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/D-TRUST_Root_Class_3_CA_2_EV_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5ad8a5d6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/68dd7389.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d04f354.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d6437c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/062cdee6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bd43e1dd.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Assured_ID_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7f3d5d1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c491639e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3513523f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/399e7759.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/feffd413.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d18e9066.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/607986c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c90bc37d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]:
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1b0f7e5c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e08bfd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Global_Root_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dd8e9d41.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed39abd0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a3418fda.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bc3f2570.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_High_Assurance_EV_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/244b5494.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/81b9768f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4be590e0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_ECC_P384_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9846683b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/252252d2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e8e7201.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_TLS_RSA4096_Root_G5.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d52c538d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c44cc0c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/DigiCert_Trusted_Root_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/75d1b2ed.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a2c66da8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ecccd8db.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust.net_Certification_Authority__2048_.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/aee5f10d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3e7271e8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0e59380.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4c3982f2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b99d060.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf64f35b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0a775a30.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/002c0b4f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cc450945.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_EC1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/106f3e4d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b3fb433b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GlobalSign.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4042bcee.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/02265526.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/455f1b52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0d69c7e1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9f727ac7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Entrust_Root_Certification_Authority_-_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5e98733a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0cd152c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dc4d6a89.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6187b673.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/FIRMAPROFESIONAL_CA_ROOT-A_WEB.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ba8887ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/068570d1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f081611a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/48a195d8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GDCA_TrustAUTH_R5_ROOT.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f6fa695.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab59055e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b92fd57f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GLOBALTRUST_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fa5da96b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ec40989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7719f463.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/GTS_Root_R1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1001acf7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f013ecaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/626dceaf.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c559d742.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1d3472b9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9479c8c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a81e292b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4bfab552.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_E46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Go_Daddy_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e071171e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/57bcb2da.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_ECC_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ab5346f4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5046c355.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HARICA_TLS_RSA_Root_CA_2021.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/865fbdf9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da0cfd1d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/85cde254.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_ECC_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cbb3f32b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureSign_RootCA11.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hellenic_Academic_and_Research_Institutions_RootCA_2015.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5860aaa6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/31188b5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/HiPKI_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c7f1359b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f15c80c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Hongkong_Post_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/09789157.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ISRG_Root_X2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/18856ac4.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e09d511.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Commercial_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cf701eeb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d06393bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/IdenTrust_Public_Sector_Root_CA_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/10531352.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Izenpe.com.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SecureTrust_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b0ed035a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsec_e-Szigno_Root_CA_2009.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8160b96c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e8651083.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2c63f966.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_ECC_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d89cda1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/01419da9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_RSA_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7a5b843.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Microsoft_RSA_Root_Certificate_Authority_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bf53fb88.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9591a472.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3afde786.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Gold_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NAVER_Global_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3fb36b73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d39b0a2c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a89d74c2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/cd58d51e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b7db1890.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/NetLock_Arany__Class_Gold__F__tan__s__tv__ny.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/988a38cb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/60afe812.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f39fc864.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5443e9e3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GB_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e73d606e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dfc0fe80.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b66938e9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1e1eab7c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/OISTE_WISeKey_Global_Root_GC_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/773e07ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c899c73.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d59297b8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ddcda989.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_1_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/749e9e03.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/52b525c7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_RootCA3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d7e8dc79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a819ef2.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/08063a00.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6b483515.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_2_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/064e0aa9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1f58a078.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6f7454b3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7fa05551.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76faf6c0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9339512a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f387163d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee37c333.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/QuoVadis_Root_CA_3_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e18bfb83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e442e424.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fe8a2cd8.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/23f4c490.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5cd81ad7.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f0c70a8d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7892ad52.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SZAFIR_ROOT_CA2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4f316efb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_EV_Root_Certification_Authority_RSA_R2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/06dc52d5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/583d0756.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Sectigo_Public_Server_Authentication_Root_R46.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_ECC.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0bf05006.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/88950faa.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9046744a.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/3c860d51.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_Root_Certification_Authority_RSA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/6fa5da56.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/33ee480d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Secure_Global_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/63a2c897.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SSL.com_TLS_ECC_Root_CA_2022.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/bdacca6f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ff34af3f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/dbff3a01.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Security_Communication_ECC_RootCA1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_C1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Class_2_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/406c9bb1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_C3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Starfield_Services_Root_Certificate_Authority_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/SwissSign_Silver_CA_-_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/99e1b953.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/T-TeleSec_GlobalRoot_Class_3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/14bc7599.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TUBITAK_Kamu_SM_SSL_Kok_Sertifikasi_-_Surum_1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Global_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/7a3adc42.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TWCA_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f459871d.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_ECC_Root_2020.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_Root_CA_-_G1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telekom_Security_TLS_RSA_Root_2023.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TeliaSonera_Root_CA_v1.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Telia_Root_CA_v2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8f103249.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f058632f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-certificates.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9bf03295.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/98aaf404.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TrustAsia_Global_Root_CA_G4.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1cef98f5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/073bfcc5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/2923b3f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f249de83.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/edcbddb5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/emSign_ECC_Root_CA_-_G3.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P256_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9b5697b0.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/1ae85e5e.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/b74d2bd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/Trustwave_Global_ECC_P384_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/d887a5bb.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9aef356c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/TunTrust_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fd64f3fc.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e13665f9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Extended_Validation_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/0f5dc4f3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/da7377f6.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/UCA_Global_G2_Root.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/c01eb047.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/304d27c3.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ed858448.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_ECC_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/f30dd6ad.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/04f60c28.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/vTrus_ECC_Root_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/USERTrust_RSA_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/fc5a8f99.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/35105088.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ee532fd5.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/XRamp_Global_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/706f604c.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/76579174.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/8d86cdd1.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/882de061.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/certSIGN_ROOT_CA_G2.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/5f618aec.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/a9d40e02.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e-Szigno_Root_CA_2017.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/e868b802.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/83e9984f.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ePKI_Root_Certification_Authority.pem not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/ca6e4ad9.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/9d6523ce.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/4b718d9b.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes/kubernetes.io~empty-dir/ca-trust-extracted/pem/directory-hash/869fbf79.0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/containers/registry/f8d22bdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c10,c16
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/6e8bbfac not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/54dd7996 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator/a4f1bb05 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/207129da not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/c1df39e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/containers/cluster-samples-operator-watch/15b8f1cd not reset as customized by admin to system_u:object_r:container_file_t:s0:c9,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3523263858/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..2025_02_23_05_27_49.3256605594/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes/kubernetes.io~configmap/images/images.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/77bd6913 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/2382c1b1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/kube-rbac-proxy/704ce128 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/70d16fe0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/bfb95535 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/containers/machine-api-operator/57a8e8e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c0,c15
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..2025_02_23_05_27_49.3413793711/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/1b9d3e5e not reset as customized by admin to system_u:object_r:container_file_t:s0:c107,c917
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/fddb173c not reset as customized by admin to system_u:object_r:container_file_t:s0:c202,c983
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/containers/kube-apiserver-operator/95d3c6c4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c219,c404
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/bfb5fff5 not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/2aef40aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/9d751cbb-f2e2-430d-9754-c882a5e924a5/containers/check-endpoints/c0391cad not reset as customized by admin to system_u:object_r:container_file_t:s0:c20,c21
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/1119e69d not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/660608b4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager/8220bd53 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/85f99d5c not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/cluster-policy-controller/4b0225f6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/9c2a3394 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-cert-syncer/e820b243 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/1ca52ea0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c776,c1007
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/f614b9022728cf315e60c057852e563e/containers/kube-controller-manager-recovery-controller/e6988e45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c214,c928
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes/kubernetes.io~configmap/mcc-auth-proxy-config/..2025_02_24_06_09_21.2517297950/config-file.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/6655f00b not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/98bc3986 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/machine-config-controller/08e3458a not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/2a191cb0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/6c4eeefb not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/containers/kube-rbac-proxy/f61a549c not reset as customized by admin to system_u:object_r:container_file_t:s0:c4,c17
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/24891863 not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/hostpath-provisioner/fbdfd89c not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/9b63b3bc not reset as customized by admin to system_u:object_r:container_file_t:s0:c37,c572
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/liveness-probe/8acde6d6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/node-driver-registrar/59ecbba3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/containers/csi-provisioner/685d4be3 not reset as customized by admin to system_u:object_r:container_file_t:s0:c318,c553
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..2025_02_24_06_20_07.341639300/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/config.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.client-ca.configmap not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/config/openshift-route-controller-manager.serving-cert.secret not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851 not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..2025_02_24_06_20_07.2950937851/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes/kubernetes.io~configmap/client-ca/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/containers/route-controller-manager/feaea55e not reset as customized by admin to system_u:object_r:container_file_t:s0:c2,c23
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abinitio-runtime-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/accuknox-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aci-containers-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airlock-microgateway/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ako-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloy/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anchore-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-cloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/appdynamics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-dcap-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ccm-node-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cfm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cilium-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloud-native-postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudera-streams-messaging-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudnative-pg/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cnfv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/conjur-follower-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/coroot-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cte-k8s-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-deploy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/digitalai-release-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edb-hcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/elasticsearch-eck-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/federatorai-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fujitsu-enterprise-postgres-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/function-mesh/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/harness-gitops-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hcp-terraform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hpe-ezmeral-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-application-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-directory-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]:
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-dr-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-licensing-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infoscale-sds-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infrastructure-asset-orchestrator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-device-plugins-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/intel-kubernetes-power-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-openshift-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8s-triliovault/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-ati-updates/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-framework/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-ingress/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-licensing/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-kcos-sso/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-load-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-loadcore-agents/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nats-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-nimbusmosaic-dusim/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-rest-api-browser-v1/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-appsec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-db/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-diagnostics/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-logging/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-migration/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-msg-broker/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-notifications/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-stats-dashboards/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-storage/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-test-core/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-wap-ui/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keysight-websocket-service/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kong-gateway-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubearmor-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lenovo-locd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memcached-operator-ogaye/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/memory-machine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-enterprise/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netapp-spark-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-adm-agent-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netscaler-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-repository-ha-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nginx-ingress-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nim-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxiq-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nxrm-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odigos-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/open-liberty-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftartifactoryha-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshiftxray-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/operator-certification-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pmem-csi-operator-os/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-component-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/runtime-fabric-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sanstoragecsi-operator-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/smilecdr-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sriov-fec/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-commons-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stackable-zookeeper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-tsc-client-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tawon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tigera-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-secrets-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vcp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/webotx-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/63709497 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/d966b7fd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-utilities/f5773757 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/81c9edb9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/57bf57ee not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/extract-content/86f5e6aa not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/0aabe31d not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/d2af85c2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/containers/registry-server/09d157d9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/3scale-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-acmpca-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigateway-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-apigatewayv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-applicationautoscaling-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-athena-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudfront-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudtrail-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatch-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-cloudwatchlogs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-documentdb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-dynamodb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ec2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecr-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ecs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-efs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eks-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elasticache-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-elbv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-emrcontainers-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-eventbridge-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-iam-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kafka-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-keyspaces-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kinesis-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-kms-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-lambda-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-memorydb-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-mq-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-networkfirewall-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-opensearchservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-organizations-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-pipes-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-prometheusservice-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-rds-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-recyclebin-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-route53resolver-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-s3-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sagemaker-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-secretsmanager-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ses-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sfn-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sns-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-sqs-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-ssm-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ack-wafv2-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/airflow-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alloydb-omni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/alvearie-imaging-ingestion/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/amd-gpu-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/analytics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/annotationlab/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicast-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-api-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurio-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apicurito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/apimatic-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/application-services-metering-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aqua/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/argocd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/assisted-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/authorino-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/automotive-infra/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aws-efs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/awss3-operator-registry/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/azure-service-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/beegfs-csi-driver-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/bpfman-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-k/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/camel-karavan-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cass-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cert-utils-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-aas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-impairment-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cluster-manager/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/codeflare-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-kubevirt-hyperconverged/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-trivy-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/community-windows-machine-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/customized-user-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cxl-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dapr-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datatrucker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dbaas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/debezium-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dell-csm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/deployment-validation-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/devopsinabox/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-amlen-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eclipse-che/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ecr-secret-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/edp-keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eginnovations-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/egressip-ipam-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ember-csi-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/etcd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/eventing-kogito/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/external-secrets-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/falcon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fence-agents-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flink-kubernetes-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k8gb/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/fossul-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/github-arc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitops-primer/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/gitwebhook-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/global-load-balancer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/grafana-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/group-sync-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hawtio-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hazelcast-platform-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13
Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hedvig-operator/catalog.json not reset as
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hive-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/horreum-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/hyperfoil-bundle/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-block-csi-operator-community/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-security-verify-access-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibm-spectrum-scale-csi-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ibmcloud-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/infinispan/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/integrity-shield-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ipfs-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/istio-workspace-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/jaeger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kaoto-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 
11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keda/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keepalived-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/keycloak-permissions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/klusterlet/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kogito-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/koku-metrics-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/konveyor-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/korrel8r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kuadrant-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kube-green/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubecost/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubernetes-imagepuller-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/l5-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/layer7-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lbconfig-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/lib-bucket-provisioner/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/limitador-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/logging-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-helm-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/loki-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/machine-deletion-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mariadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marin3r/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mercury-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/microcks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-atlas-kubernetes/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/mongodb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/move2kube-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multi-nic-cni-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-global-hub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/multicluster-operators-subscription/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/must-gather-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/namespace-configuration-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ncn-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ndmspc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/netobserv-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-community-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nexus-operator-m88i/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nfs-provisioner-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nlp-server/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-discovery-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-healthcheck-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/node-maintenance-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/nsm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oadp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/observability-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/oci-ccm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ocm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/odoo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opendatahub-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openebs/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-nfd-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-node-upgrade-mutex-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/openshift-qiskit-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/opentelemetry-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patch-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/patterns-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pcc-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pelorus-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/percona-xtradb-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/portworx-essentials/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/postgresql/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/proactive-node-scaling-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/project-quay/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus/catalog.json not reset as customized by admin 
to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometheus-exporter-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/prometurbo/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pubsubplus-eventbroker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pulp-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-cluster-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rabbitmq-messaging-topology-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/reportportal-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/resource-locker-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/rhoas-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ripsaw/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sailoperator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-commerce-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-data-intelligence-observer-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sap-hana-express-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/self-node-remediation/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/service-binding-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/shipwright-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sigstore-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/silicom-sts-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/skupper-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snapscheduler/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/snyk-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/socmmd/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonar-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosivio/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc 
restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sonataflow-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/sosreport-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/spark-helm-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/special-resource-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/stolostron-engine/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/strimzi-kafka-operator/catalog.json not reset as 
customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/syndesis/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tagger/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tempo-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tf-controller/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/tidb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trident-operator/catalog.json not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/trustify-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ucs-ci-solutions-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/universal-crossplane/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/varnish-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vault-config-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/verticadb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/volume-expander-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/wandb-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/windup-operator/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yaks/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c0fe7256 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/c30319e4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-utilities/e6b1dd45 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/2bb643f0 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/920de426 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/extract-content/70fa1e87 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/a1c12a2f not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/9442e6c7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/containers/registry-server/5b45ec72 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/abot-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aerospike-kubernetes-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/aikit-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzo-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzograph-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/anzounstructured-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cloudbees-ci-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/cockroachdb-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/crunchy-postgres-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/datadog-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/dynatrace-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/entando-k8s-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/flux/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/instana-agent-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/iomesh-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/joget-dx8-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/k10-kasten-operator-term-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubemq-operator-marketplace-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/kubeturbo-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/linstor-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/marketplace-games-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/model-builder-for-vision-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/neuvector-certified-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/ovms-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/pachyderm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/redis-enterprise-operator-cert-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/seldon-deploy-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-paygo-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/starburst-enterprise-helm-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/t8c-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/timemachine-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/vfunction-server-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/xcrypt-operator-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/yugabyte-platform-operator-bundle-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/catalog/zabbix-operator-certified-rhmp/catalog.json not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/00000-1.psg.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/db.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/index.pmt not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/main.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/db/overflow.pix not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/catalog-content/cache/pogreb.v1/digest not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes/kubernetes.io~empty-dir/utilities/copy-content not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/3c9f3a59 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/1091c11b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-utilities/9a6821c6 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/ec0c35e2 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/517f37e7 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/extract-content/6214fe78 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/ba189c8b not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/351e4f31 not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/containers/registry-server/c0f219ff not reset as customized by admin to system_u:object_r:container_file_t:s0:c7,c13 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/etc-hosts not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/8069f607 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/559c3d82 not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/wait-for-host-port/605ad488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/148df488 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/3bf6dcb4 not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c133,c223 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler/022a2feb not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/938c3924 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/729fe23e not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-cert-syncer/1fd5cbd4 not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/a96697e1 not reset as customized by admin to system_u:object_r:container_file_t:s0:c378,c723 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/e155ddca not reset as customized by admin to system_u:object_r:container_file_t:s0:c133,c223 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/3dcd261975c3d6b9a6ad6367fd4facd3/containers/kube-scheduler-recovery-controller/10dd0e0f not reset as customized by admin to system_u:object_r:container_file_t:s0:c247,c522 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..2025_02_24_06_09_35.3018472960/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-trusted-ca-bundle/ca-bundle.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: 
/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..2025_02_24_06_09_35.4262376737/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/audit-policies/audit.yaml not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..2025_02_24_06_09_35.2630275752/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-cliconfig/v4-0-config-system-cliconfig not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..2025_02_24_06_09_35.2376963788/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/..data not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes/kubernetes.io~configmap/v4-0-config-system-service-ca/service-ca.crt not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/etc-hosts not reset as customized by admin to 
system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/6f2c8392 not reset as customized by admin to system_u:object_r:container_file_t:s0:c267,c588 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/containers/oauth-openshift/bd241ad9 not reset as customized by admin to system_u:object_r:container_file_t:s0:c682,c947 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/csi-hostpath/csi.sock not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983 not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/vol_data.json not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: /var/lib/kubelet/plugins_registry not reset as customized by admin to system_u:object_r:container_file_t:s0 Jan 20 11:04:31 crc restorecon[4689]: Relabeled /var/usrlocal/bin/kubenswrapper from system_u:object_r:bin_t:s0 to system_u:object_r:kubelet_exec_t:s0 Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version. Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --pod-infra-container-image has been deprecated, will be removed in a future release. Image garbage collector will get sandbox image information from CRI. Jan 20 11:04:32 crc kubenswrapper[4725]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.739823 4725 server.go:211] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743580 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743601 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743605 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743610 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743615 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743620 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743624 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743632 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743637 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743641 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743649 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743653 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743656 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743665 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743669 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743672 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743676 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743680 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743684 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743688 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743692 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743696 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743699 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743703 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743707 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743714 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743718 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743721 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743725 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743729 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743733 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743738 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743742 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743748 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743754 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743759 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743762 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743769 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743773 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743777 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743781 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743784 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743788 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743792 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743900 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743905 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743910 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743914 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743918 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743921 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743925 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743932 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743936 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743941 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743950 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743962 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743968 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743974 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743979 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743983 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743988 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743992 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.743996 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744005 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744011 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744016 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744025 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744031 4725 feature_gate.go:330] unrecognized feature gate: Example Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744035 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744041 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.744045 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744407 4725 flags.go:64] FLAG: --address="0.0.0.0" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744510 4725 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744537 4725 flags.go:64] FLAG: --anonymous-auth="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744546 4725 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744557 4725 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744563 4725 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744573 4725 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744588 4725 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744593 4725 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744597 4725 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 20 
11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744603 4725 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744609 4725 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744615 4725 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744620 4725 flags.go:64] FLAG: --cgroup-root="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744625 4725 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744630 4725 flags.go:64] FLAG: --client-ca-file="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744640 4725 flags.go:64] FLAG: --cloud-config="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744652 4725 flags.go:64] FLAG: --cloud-provider="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744660 4725 flags.go:64] FLAG: --cluster-dns="[]" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744675 4725 flags.go:64] FLAG: --cluster-domain="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744681 4725 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744686 4725 flags.go:64] FLAG: --config-dir="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744691 4725 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744696 4725 flags.go:64] FLAG: --container-log-max-files="5" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744704 4725 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744709 4725 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744716 4725 flags.go:64] FLAG: --containerd="/run/containerd/containerd.sock" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744727 4725 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744740 4725 flags.go:64] FLAG: --contention-profiling="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744745 4725 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744970 4725 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744987 4725 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.744994 4725 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745004 4725 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745009 4725 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745013 4725 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745017 4725 flags.go:64] FLAG: --enable-load-reader="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745023 4725 flags.go:64] FLAG: --enable-server="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745027 4725 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745041 4725 flags.go:64] FLAG: --event-burst="100" Jan 20 11:04:32 crc 
kubenswrapper[4725]: I0120 11:04:32.745046 4725 flags.go:64] FLAG: --event-qps="50" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745050 4725 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745055 4725 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745060 4725 flags.go:64] FLAG: --eviction-hard="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745102 4725 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745107 4725 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745111 4725 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745117 4725 flags.go:64] FLAG: --eviction-soft="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745121 4725 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745127 4725 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745132 4725 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745138 4725 flags.go:64] FLAG: --experimental-mounter-path="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745143 4725 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745148 4725 flags.go:64] FLAG: --fail-swap-on="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745152 4725 flags.go:64] FLAG: --feature-gates="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745159 4725 flags.go:64] FLAG: --file-check-frequency="20s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745163 4725 flags.go:64] FLAG: --global-housekeeping-interval="1m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745169 4725 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745175 4725 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745180 4725 flags.go:64] FLAG: --healthz-port="10248" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745185 4725 flags.go:64] FLAG: --help="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745190 4725 flags.go:64] FLAG: --hostname-override="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745195 4725 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745202 4725 flags.go:64] FLAG: --http-check-frequency="20s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745209 4725 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745214 4725 flags.go:64] FLAG: --image-credential-provider-config="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745218 4725 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745223 4725 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745227 4725 flags.go:64] FLAG: --image-service-endpoint="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745231 4725 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 
11:04:32.745236 4725 flags.go:64] FLAG: --kube-api-burst="100" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745240 4725 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745245 4725 flags.go:64] FLAG: --kube-api-qps="50" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745249 4725 flags.go:64] FLAG: --kube-reserved="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745254 4725 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745258 4725 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745263 4725 flags.go:64] FLAG: --kubelet-cgroups="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745267 4725 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745272 4725 flags.go:64] FLAG: --lock-file="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745276 4725 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745280 4725 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745285 4725 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745295 4725 flags.go:64] FLAG: --log-json-split-stream="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745309 4725 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745314 4725 flags.go:64] FLAG: --log-text-split-stream="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745326 4725 flags.go:64] FLAG: --logging-format="text" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745337 4725 flags.go:64] FLAG: --machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745344 4725 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745350 4725 flags.go:64] FLAG: --manifest-url="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745356 4725 flags.go:64] FLAG: --manifest-url-header="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745368 4725 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745373 4725 flags.go:64] FLAG: --max-open-files="1000000" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745379 4725 flags.go:64] FLAG: --max-pods="110" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745384 4725 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745389 4725 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745394 4725 flags.go:64] FLAG: --memory-manager-policy="None" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745399 4725 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745404 4725 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745408 4725 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745413 4725 flags.go:64] FLAG: 
--node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhcos" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745434 4725 flags.go:64] FLAG: --node-status-max-images="50" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745439 4725 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745443 4725 flags.go:64] FLAG: --oom-score-adj="-999" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745450 4725 flags.go:64] FLAG: --pod-cidr="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745455 4725 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:33549946e22a9ffa738fd94b1345f90921bc8f92fa6137784cb33c77ad806f9d" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745463 4725 flags.go:64] FLAG: --pod-manifest-path="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745468 4725 flags.go:64] FLAG: --pod-max-pids="-1" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745472 4725 flags.go:64] FLAG: --pods-per-core="0" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745477 4725 flags.go:64] FLAG: --port="10250" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745482 4725 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745486 4725 flags.go:64] FLAG: --provider-id="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745490 4725 flags.go:64] FLAG: --qos-reserved="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745495 4725 flags.go:64] FLAG: --read-only-port="10255" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745500 4725 flags.go:64] FLAG: --register-node="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745504 4725 flags.go:64] FLAG: --register-schedulable="true" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745508 4725 flags.go:64] FLAG: --register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745519 4725 flags.go:64] FLAG: --registry-burst="10" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745523 4725 flags.go:64] FLAG: --registry-qps="5" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745528 4725 flags.go:64] FLAG: --reserved-cpus="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745534 4725 flags.go:64] FLAG: --reserved-memory="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745541 4725 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745546 4725 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745551 4725 flags.go:64] FLAG: --rotate-certificates="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745555 4725 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745559 4725 flags.go:64] FLAG: --runonce="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745564 4725 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745569 4725 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745574 4725 flags.go:64] FLAG: --seccomp-default="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745579 4725 flags.go:64] FLAG: --serialize-image-pulls="true" 
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745584 4725 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745590 4725 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745596 4725 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745601 4725 flags.go:64] FLAG: --storage-driver-password="root" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745607 4725 flags.go:64] FLAG: --storage-driver-secure="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745611 4725 flags.go:64] FLAG: --storage-driver-table="stats" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745616 4725 flags.go:64] FLAG: --storage-driver-user="root" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745621 4725 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745626 4725 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745630 4725 flags.go:64] FLAG: --system-cgroups="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745635 4725 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745643 4725 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745648 4725 flags.go:64] FLAG: --tls-cert-file="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745652 4725 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745660 4725 flags.go:64] FLAG: --tls-min-version="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745665 4725 flags.go:64] FLAG: --tls-private-key-file="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745669 4725 flags.go:64] FLAG: --topology-manager-policy="none" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745673 4725 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745677 4725 flags.go:64] FLAG: --topology-manager-scope="container" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745683 4725 flags.go:64] FLAG: --v="2" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745691 4725 flags.go:64] FLAG: --version="false" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745699 4725 flags.go:64] FLAG: --vmodule="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745705 4725 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.745709 4725 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745914 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745921 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745927 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745932 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745939 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745943 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745949 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745955 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745960 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745965 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745969 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745973 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745977 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745981 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745985 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745988 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745993 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.745997 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746001 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746004 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746009 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746013 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746016 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746020 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746024 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746028 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746031 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746035 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746040 4725 feature_gate.go:330] unrecognized feature gate: 
PersistentIPsForVirtualization Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746044 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746047 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746051 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746055 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746058 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746062 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746065 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746069 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746073 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746094 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746099 4725 feature_gate.go:330] unrecognized feature gate: Example Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746102 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746106 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746110 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746114 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746151 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746155 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746160 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746165 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746169 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746174 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746178 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746182 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746186 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746191 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746195 4725 feature_gate.go:330] unrecognized feature gate: 
ChunkSizeMiB Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746199 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746203 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746206 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746210 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746214 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746219 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746223 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746227 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746232 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746238 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746244 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746249 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746254 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746261 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746266 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.746282 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.746300 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.755942 4725 server.go:491] "Kubelet version" kubeletVersion="v1.31.5" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.755991 4725 server.go:493] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756114 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756130 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756139 4725 feature_gate.go:330] unrecognized feature gate: 
AdminNetworkPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756145 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756153 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756158 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756163 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756169 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756174 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756180 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756185 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756190 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756195 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756200 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756206 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756211 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756216 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756222 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756228 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756234 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756240 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756247 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756253 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756258 4725 feature_gate.go:330] unrecognized feature gate: Example Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756263 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756269 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756274 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756279 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756284 4725 feature_gate.go:330] unrecognized feature gate: 
SetEIPForNLBIngressController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756289 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756295 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756300 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756305 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756310 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756317 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756323 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756328 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756335 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756343 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756349 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756356 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756361 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756367 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756373 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756378 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756384 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756390 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756397 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756404 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756409 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756415 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756421 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756426 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756432 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756437 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756443 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756448 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756454 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756459 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756465 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756472 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756478 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756483 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756488 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756493 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756499 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756506 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756512 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756517 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756523 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756530 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.756540 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756710 4725 feature_gate.go:330] unrecognized feature gate: NetworkSegmentation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756718 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756724 4725 feature_gate.go:330] unrecognized feature gate: OnClusterBuild Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756729 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIProviderOpenStack Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756735 4725 feature_gate.go:330] unrecognized feature gate: OVNObservability Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756740 4725 feature_gate.go:330] unrecognized feature gate: ExternalOIDC Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756747 4725 feature_gate.go:353] Setting GA feature gate CloudDualStackNodeIPs=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756754 4725 feature_gate.go:330] unrecognized feature gate: AzureWorkloadIdentity Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756760 4725 feature_gate.go:330] unrecognized feature gate: BuildCSIVolumes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756766 4725 feature_gate.go:330] unrecognized feature gate: ClusterMonitoringConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756772 4725 feature_gate.go:330] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756778 4725 feature_gate.go:330] unrecognized feature gate: PersistentIPsForVirtualization Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756785 4725 feature_gate.go:330] unrecognized feature gate: GCPClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756791 4725 feature_gate.go:330] unrecognized feature gate: NutanixMultiSubnets Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756797 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImagesAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756805 4725 feature_gate.go:353] Setting GA feature gate ValidatingAdmissionPolicy=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756812 4725 feature_gate.go:330] unrecognized feature gate: BareMetalLoadBalancer Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756818 4725 feature_gate.go:330] unrecognized feature gate: PlatformOperators Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756824 4725 feature_gate.go:330] unrecognized feature gate: SetEIPForNLBIngressController Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756830 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfigAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756836 4725 feature_gate.go:330] unrecognized feature gate: UpgradeStatus Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756842 4725 feature_gate.go:330] unrecognized feature gate: InsightsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756848 4725 feature_gate.go:330] unrecognized feature gate: GCPLabelsTags Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756854 4725 feature_gate.go:330] unrecognized feature gate: MetricsCollectionProfiles Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756860 4725 feature_gate.go:330] unrecognized feature gate: VSphereStaticIPs Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756866 4725 feature_gate.go:330] unrecognized feature gate: AdditionalRoutingCapabilities Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756871 4725 feature_gate.go:330] unrecognized feature gate: AlibabaPlatform Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756877 4725 feature_gate.go:330] unrecognized feature gate: InsightsOnDemandDataGather Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756885 4725 feature_gate.go:353] Setting GA feature gate DisableKubeletCloudCredentialProviders=true. It will be removed in a future release. 
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756891 4725 feature_gate.go:330] unrecognized feature gate: AutomatedEtcdBackup Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756897 4725 feature_gate.go:330] unrecognized feature gate: MixedCPUsAllocation Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756902 4725 feature_gate.go:330] unrecognized feature gate: SignatureStores Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756908 4725 feature_gate.go:330] unrecognized feature gate: NetworkLiveMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756913 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstall Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756919 4725 feature_gate.go:330] unrecognized feature gate: ManagedBootImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756924 4725 feature_gate.go:330] unrecognized feature gate: ImageStreamImportMode Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756930 4725 feature_gate.go:330] unrecognized feature gate: Example Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756935 4725 feature_gate.go:330] unrecognized feature gate: EtcdBackendQuota Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756941 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiNetworks Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756946 4725 feature_gate.go:330] unrecognized feature gate: InsightsRuntimeExtractor Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756951 4725 feature_gate.go:330] unrecognized feature gate: PrivateHostedZoneAWS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756957 4725 feature_gate.go:330] unrecognized feature gate: AWSClusterHostedDNS Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756962 4725 feature_gate.go:330] unrecognized feature gate: NetworkDiagnosticsConfig Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756967 4725 feature_gate.go:330] unrecognized feature gate: VSphereControlPlaneMachineSet Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756973 4725 feature_gate.go:330] unrecognized feature gate: VSphereDriverConfiguration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756981 4725 feature_gate.go:330] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756986 4725 feature_gate.go:330] unrecognized feature gate: MinimumKubeletVersion Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756992 4725 feature_gate.go:330] unrecognized feature gate: PinnedImages Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.756997 4725 feature_gate.go:330] unrecognized feature gate: HardwareSpeed Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757002 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAzure Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757008 4725 feature_gate.go:330] unrecognized feature gate: VolumeGroupSnapshot Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757013 4725 feature_gate.go:330] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757019 4725 feature_gate.go:330] unrecognized feature gate: NodeDisruptionPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757024 4725 feature_gate.go:330] unrecognized feature gate: BootcNodeManagement Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757029 4725 feature_gate.go:330] unrecognized feature gate: 
MachineAPIMigration Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757034 4725 feature_gate.go:330] unrecognized feature gate: DNSNameResolver Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757039 4725 feature_gate.go:330] unrecognized feature gate: VSphereMultiVCenters Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757045 4725 feature_gate.go:330] unrecognized feature gate: ChunkSizeMiB Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757050 4725 feature_gate.go:330] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757056 4725 feature_gate.go:330] unrecognized feature gate: NewOLM Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757062 4725 feature_gate.go:351] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757068 4725 feature_gate.go:330] unrecognized feature gate: SigstoreImageVerification Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757096 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallGCP Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757104 4725 feature_gate.go:330] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757112 4725 feature_gate.go:330] unrecognized feature gate: CSIDriverSharedResource Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757118 4725 feature_gate.go:330] unrecognized feature gate: AdminNetworkPolicy Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757123 4725 feature_gate.go:330] unrecognized feature gate: AWSEFSDriverVolumeMetrics Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757129 4725 feature_gate.go:330] unrecognized feature gate: GatewayAPI Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757134 4725 feature_gate.go:330] unrecognized feature gate: RouteAdvertisements Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757139 4725 feature_gate.go:330] unrecognized feature gate: MachineConfigNodes Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.757146 4725 feature_gate.go:330] unrecognized feature gate: MultiArchInstallAWS Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.757155 4725 feature_gate.go:386] feature gates: {map[CloudDualStackNodeIPs:true DisableKubeletCloudCredentialProviders:true DynamicResourceAllocation:false EventedPLEG:false KMSv1:true MaxUnavailableStatefulSet:false NodeSwap:false ProcMountType:false RouteExternalCertificate:false ServiceAccountTokenNodeBinding:false TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:false UserNamespacesSupport:false ValidatingAdmissionPolicy:true VolumeAttributesClass:false]} Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.757639 4725 server.go:940] "Client rotation is on, will bootstrap in background" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.760749 4725 bootstrap.go:85] "Current kubeconfig file contents are still valid, no bootstrap necessary" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.760865 4725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". 
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.761477 4725 server.go:997] "Starting client certificate rotation"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.761507 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate rotation is enabled
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.766723 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2026-02-24 05:52:08 +0000 UTC, rotation deadline is 2025-12-13 14:29:51.546099922 +0000 UTC
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.766935 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.773106 4725 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.777216 4725 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.777614 4725 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.788288 4725 log.go:25] "Validated CRI v1 runtime API"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.803333 4725 log.go:25] "Validated CRI v1 image API"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.804889 4725 server.go:1437] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.807742 4725 fs.go:133] Filesystem UUIDs: map[0b076daa-c26a-46d2-b3a6-72a8dbc6e257:/dev/vda4 2026-01-20-11-00-09-00:/dev/sr0 7B77-95E7:/dev/vda2 de0497b0-db1b-465a-b278-03db02455c71:/dev/vda3]
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.807777 4725 fs.go:134] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:42 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:30 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:43 fsType:tmpfs blockSize:0}]
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.826752 4725 manager.go:217] Machine: {Timestamp:2026-01-20 11:04:32.825123566 +0000 UTC m=+1.033445559 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33654124544 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:21801e6708c44f15b81395eb736a7cec SystemUUID:38403e10-86da-4c2a-98da-84319c85ddeb BootID:6eec783f-1471-434e-9e46-81d4bd7eabfe Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6730825728 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:30 Capacity:16827064320 Type:vfs Inodes:1048576 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:42 Capacity:3365408768 Type:vfs Inodes:821633 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:43 Capacity:1073741824 Type:vfs Inodes:4108169 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16827060224 Type:vfs Inodes:4108169 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:a5:5a:0b Speed:0 Mtu:1500} {Name:br-int MacAddress:d6:39:55:2e:22:71 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:a5:5a:0b Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:0c:ba:c8 Speed:-1 Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:2c:9c:20 Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:f5:f4:84 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:93:ba:44 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:6e:d3:cc:a9:15:45 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:66:6f:1e:cb:28:dc Speed:0 Mtu:1500}] Topology:[{Id:0 Memory:33654124544 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None}
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.827037 4725 manager_no_libpfm.go:29] cAdvisor is build without cgo and/or libpfm support. Perf event counters are not available.
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.827292 4725 manager.go:233] Version: {KernelVersion:5.14.0-427.50.2.el9_4.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 418.94.202502100215-0 DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:}
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.827947 4725 swap_util.go:113] "Swap is on" /proc/swaps contents="Filename\t\t\t\tType\t\tSize\t\tUsed\t\tPriority"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828163 4725 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828211 4725 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828459 4725 topology_manager.go:138] "Creating topology manager with none policy"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828471 4725 container_manager_linux.go:303] "Creating device plugin manager"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828728 4725 manager.go:142] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828760 4725 server.go:66] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.828938 4725 state_mem.go:36] "Initialized new in-memory state store"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829042 4725 server.go:1245] "Using root directory" path="/var/lib/kubelet"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829824 4725 kubelet.go:418] "Attempting to sync node with API server"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829848 4725 kubelet.go:313] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829874 4725 file.go:69] "Watching path" path="/etc/kubernetes/manifests"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829890 4725 kubelet.go:324] "Adding apiserver pod source"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.829905 4725 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.831992 4725 kuberuntime_manager.go:262] "Container runtime initialized" containerRuntime="cri-o" version="1.31.5-4.rhaos4.18.gitdad78d5.el9" apiVersion="v1"
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.832536 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.832610 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.832597 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.832717 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.832984 4725 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-server-current.pem".
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.843367 4725 kubelet.go:854] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844299 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844332 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844342 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/git-repo"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844351 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/host-path"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844366 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/nfs"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844377 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/secret"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844387 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/iscsi"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844402 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/downward-api"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844415 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/fc"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844425 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/configmap"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844458 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/projected"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.844467 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/local-volume"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.845014 4725 plugins.go:603] "Loaded volume plugin" pluginName="kubernetes.io/csi"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.845583 4725 server.go:1280] "Started kubelet"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.845952 4725 server.go:163] "Starting to listen" address="0.0.0.0" port=10250
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.845953 4725 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.846592 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.850790 4725 server.go:236] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 20 11:04:32 crc systemd[1]: Started Kubernetes Kubelet.
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.852340 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate rotation is enabled
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.852404 4725 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.852984 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-28 02:51:32.011257667 +0000 UTC
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.857440 4725 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.857659 4725 volume_manager.go:287] "The desired_state_of_world populator starts"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.857677 4725 volume_manager.go:289] "Starting Kubelet Volume Manager"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.857738 4725 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.856254 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188c6b9c55a2a206 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 11:04:32.845554182 +0000 UTC m=+1.053876165,LastTimestamp:2026-01-20 11:04:32.845554182 +0000 UTC m=+1.053876165,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.858167 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms"
Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.858196 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused
Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.858322 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.861393 4725 factory.go:55] Registering systemd factory
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.861436 4725 factory.go:221] Registration of the systemd container factory successfully
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.861580 4725 server.go:460] "Adding debug handlers to kubelet server"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.862863 4725 factory.go:153] Registering CRI-O factory
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.862885 4725 factory.go:221] Registration of the crio container factory successfully
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.862955 4725 factory.go:219] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.862983 4725 factory.go:103] Registering Raw factory
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.863003 4725 manager.go:1196] Started watching for new ooms in manager
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.863689 4725 manager.go:319] Starting recovery of all containers
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867667 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867722 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867737 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867750 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867762 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867774 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867785 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867796 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867811 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867821 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="efdd0498-1daa-4136-9a4a-3b948c2293fc" volumeName="kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867833 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867846 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867857 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867872 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867883 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" volumeName="kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867894 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867905 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867938 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867951 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867964 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867974 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867986 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.867997 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868008 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868019 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868031 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868043 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868056 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868111 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868126 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" volumeName="kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868138 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868148 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868160 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868173 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868186 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868198 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868210 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868222 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868233 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868244 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868255 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868267 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868279 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868292 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868304 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868319 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868337 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" volumeName="kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868350 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868366 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868382 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868396 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868412 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868507 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868525 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868539 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868552 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868565 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868578 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868589 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868600 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868612 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20b0d48f-5fd6-431c-a545-e3c800c7b866" volumeName="kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868626 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868637 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868649 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868663 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868677 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868689 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868700 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868741 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868754 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868766 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868778 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868790 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868801 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868812 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868824 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.868837 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873735 4725 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount"
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873832 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873861 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873888 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3b6479f0-333b-4a96-9adf-2099afdc2447" volumeName="kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873911 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873969 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.873990 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874010 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d75a4c96-2883-4a0b-bab2-0fab2b6c0b49" volumeName="kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874032 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874052 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874073 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874121 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874140 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874157 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874177 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5fe579f8-e8a6-4643-bce5-a661393c4dde" volumeName="kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874197 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874217 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fda69060-fa79-4696-b1a6-7980f124bf7c" volumeName="kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874235 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874251 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874268 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874338 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874362 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874379 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874396 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874413 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874430 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874449 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874464 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874492 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874515 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874534 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49ef4625-1d3a-4a9f-b595-c2433d32326d" volumeName="kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874554 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874573 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874593 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874612 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874634 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874653 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bf126b07-da06-4140-9a57-dfd54fc6b486" volumeName="kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874676 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874696 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874713 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874731 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="44663579-783b-4372-86d6-acf235a62d72" volumeName="kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874749 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874765 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="4bb40260-dbaa-4fb0-84df-5e680505d512" volumeName="kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874784 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874801 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874817 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6312bbd-5731-4ea0-a20f-81d5a57df44a" volumeName="kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874831 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" volumeName="kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874847 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874864 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" volumeName="kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874879 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874897 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874912 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874929 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874947 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874969 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.874987 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875006 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6509e943-70c6-444c-bc41-48a544e36fbd" volumeName="kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875024 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875042 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="496e6271-fb68-4057-954e-a0d97a4afa3f" volumeName="kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875058 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5b88f790-22fa-440e-b583-365168c0b23d" volumeName="kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875071 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6402fda4-df10-493c-b4e5-d0569419652d" volumeName="kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875136 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875155 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" seLinuxMountContext=""
Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875167 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69"
volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875182 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875196 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875210 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875226 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875240 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a31745f5-9847-4afe-82a5-3161cc66ca93" volumeName="kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875253 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" volumeName="kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875271 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875283 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="22c825df-677d-4ca6-82db-3454ed06e783" volumeName="kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875297 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875310 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ea678ab-3438-413e-bfe3-290ae7725660" volumeName="kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875323 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" 
volumeName="kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875336 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875350 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875361 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5441d097-087c-4d9a-baa8-b210afa90fc9" volumeName="kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875375 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875389 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875404 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875416 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1d611f23-29be-4491-8495-bee1670e935f" volumeName="kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875430 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" volumeName="kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875441 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875455 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="925f1c65-6136-48ba-85aa-3a3b50560753" volumeName="kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875467 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c03ee662-fb2f-4fc4-a2c1-af487c19d254" 
volumeName="kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875480 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875495 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875508 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1386a44e-36a2-460c-96d0-0359d2b6f0f5" volumeName="kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875521 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" volumeName="kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875534 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="87cf06ed-a83f-41a7-828d-70653580a8cb" volumeName="kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875546 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7539238d-5fe0-46ed-884e-1c3b566537ec" volumeName="kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875567 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" volumeName="kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875580 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" volumeName="kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875592 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b78653f-4ff9-4508-8672-245ed9b561e3" volumeName="kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875605 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="25e176fe-21b4-4974-b1ed-c8b94f112a7f" volumeName="kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875619 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" 
volumeName="kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875631 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3ab1a177-2de0-46d9-b765-d0d0649bb42e" volumeName="kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875644 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09efc573-dbb6-4249-bd59-9b87aba8dd28" volumeName="kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875658 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875671 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e7e6199b-1264-4501-8953-767f51328d08" volumeName="kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875683 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ef543e1b-8068-4ea3-b32a-61027b32e95d" volumeName="kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875698 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875711 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875726 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" volumeName="kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875739 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875751 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="43509403-f426-496e-be36-56cef71462f5" volumeName="kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875764 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" 
volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875776 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875789 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="57a731c4-ef35-47a8-b875-bfb08a7f8011" volumeName="kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875802 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="1bf7eb37-55a3-4c65-b768-a94c82151e69" volumeName="kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875814 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875827 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875840 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6731426b-95fe-49ff-bb5f-40441049fde2" volumeName="kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875853 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9d4552c7-cd75-42dd-8880-30dd377c49a4" volumeName="kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875866 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01ab3dd5-8196-46d0-ad33-122e2ca51def" volumeName="kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875880 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="37a5e44f-9a88-4405-be8a-b645485e7312" volumeName="kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875894 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875907 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="9d751cbb-f2e2-430d-9754-c882a5e924a5" volumeName="kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875920 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b574797-001e-440a-8f4e-c0be86edad0f" volumeName="kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875933 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875947 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7bb08738-c794-4ee8-9972-3a62ca171029" volumeName="kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875960 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="8f668bae-612b-4b75-9490-919e737c6a3b" volumeName="kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875972 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" volumeName="kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875984 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" volumeName="kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.875997 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" volumeName="kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876009 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31d8b7a1-420e-4252-a5b7-eebe8a111292" volumeName="kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876022 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" volumeName="kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876037 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5225d0e4-402f-4861-b410-819f433b1803" volumeName="kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876049 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" volumeName="kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876062 4725 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7583ce53-e0fe-4a16-9e4d-50516596a136" volumeName="kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" seLinuxMountContext="" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876076 4725 reconstruct.go:97] "Volume reconstruction finished" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.876112 4725 reconciler.go:26] "Reconciler: start to sync state" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.882811 4725 manager.go:324] Recovery completed Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.895105 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.896670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.896721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.896738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.897766 4725 cpu_manager.go:225] "Starting CPU manager" policy="none" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.897793 4725 cpu_manager.go:226] "Reconciling" reconcilePeriod="10s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.897817 4725 state_mem.go:36] "Initialized new in-memory state store" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.909224 4725 policy_none.go:49] "None policy: Start" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.910486 4725 memory_manager.go:170] "Starting memorymanager" policy="None" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.910558 4725 state_mem.go:35] "Initializing new in-memory state store" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.928983 4725 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.930936 4725 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.930972 4725 status_manager.go:217] "Starting to sync pod status with apiserver" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.931029 4725 kubelet.go:2335] "Starting kubelet main sync loop" Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.931250 4725 kubelet.go:2359] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 11:04:32 crc kubenswrapper[4725]: W0120 11:04:32.932072 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.932178 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.957577 4725 kubelet_node_status.go:503] "Error getting the current node from lister" err="node \"crc\" not found" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.980744 4725 manager.go:334] "Starting Device Plugin manager" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.981345 4725 manager.go:513] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.981373 4725 server.go:79] "Starting device plugin registration server" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982056 4725 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982075 4725 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982444 4725 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982571 4725 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 20 11:04:32 crc kubenswrapper[4725]: I0120 11:04:32.982577 4725 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 11:04:32 crc kubenswrapper[4725]: E0120 11:04:32.989855 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.032175 4725 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc"] Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.032414 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.034208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.034269 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.034287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.034584 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.035045 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.035122 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.035984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036238 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036503 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.036541 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037295 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037461 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037499 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.037537 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038355 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038388 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.038416 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039436 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.039909 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.040004 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.040870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.040955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.041060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.058768 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079114 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079234 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079286 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079329 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079375 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079420 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079460 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079593 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079646 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079674 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079695 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc 
kubenswrapper[4725]: I0120 11:04:33.079713 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079732 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.079753 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.083551 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.085058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.085198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.085292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.085410 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.086036 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181129 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181165 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181194 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod 
\"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181222 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181249 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181277 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181306 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181308 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-usr-local-bin\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181335 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181341 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181364 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: 
\"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-static-pod-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181392 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181380 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-resource-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181436 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-data-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181444 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-log-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181450 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181299 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181408 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181474 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f614b9022728cf315e60c057852e563e-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"f614b9022728cf315e60c057852e563e\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181489 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: 
\"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181538 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/2139d3e2895fc6797b9c76a1b4c9886d-cert-dir\") pod \"etcd-crc\" (UID: \"2139d3e2895fc6797b9c76a1b4c9886d\") " pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181553 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181569 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3dcd261975c3d6b9a6ad6367fd4facd3-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"3dcd261975c3d6b9a6ad6367fd4facd3\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.181662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.182752 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.182966 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/d1b160f5dda77d281dd8e69ec8d817f9-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"d1b160f5dda77d281dd8e69ec8d817f9\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.287115 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.288213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.288246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.288255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.288277 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.288774 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" 
Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.380536 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.401357 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.407069 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf4b27818a5e8e43d0dc095d08835c792.slice/crio-be81bafcce2e00ebec316aca6ad7b9c4a298e6f3417b49b2b66df2c166addc97 WatchSource:0}: Error finding container be81bafcce2e00ebec316aca6ad7b9c4a298e6f3417b49b2b66df2c166addc97: Status 404 returned error can't find the container with id be81bafcce2e00ebec316aca6ad7b9c4a298e6f3417b49b2b66df2c166addc97 Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.423722 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf614b9022728cf315e60c057852e563e.slice/crio-1f527841984fbc9666ac99d6b4a0424b10d7e6c5966460efca598697439dce53 WatchSource:0}: Error finding container 1f527841984fbc9666ac99d6b4a0424b10d7e6c5966460efca598697439dce53: Status 404 returned error can't find the container with id 1f527841984fbc9666ac99d6b4a0424b10d7e6c5966460efca598697439dce53 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.431021 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.449912 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3dcd261975c3d6b9a6ad6367fd4facd3.slice/crio-ea1146a2c66bcb07cf4c184268f6d2361acf42c817e3726c7c48adee498b4005 WatchSource:0}: Error finding container ea1146a2c66bcb07cf4c184268f6d2361acf42c817e3726c7c48adee498b4005: Status 404 returned error can't find the container with id ea1146a2c66bcb07cf4c184268f6d2361acf42c817e3726c7c48adee498b4005 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.452853 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.459442 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.463048 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.472986 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd1b160f5dda77d281dd8e69ec8d817f9.slice/crio-a52ddad4327dcdc2d367cffbc3b51dce9d61647332145b3bbb57b4ea703e4b1b WatchSource:0}: Error finding container a52ddad4327dcdc2d367cffbc3b51dce9d61647332145b3bbb57b4ea703e4b1b: Status 404 returned error can't find the container with id a52ddad4327dcdc2d367cffbc3b51dce9d61647332145b3bbb57b4ea703e4b1b Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.689134 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.690267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.690297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.690306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.690328 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.690744 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 20 11:04:33 crc kubenswrapper[4725]: W0120 11:04:33.839652 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:33 crc kubenswrapper[4725]: E0120 11:04:33.839939 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.847592 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.854689 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 10:23:34.62189153 +0000 UTC Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.936324 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c" exitCode=0 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.936391 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 
11:04:33.936460 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"ae01245f715e7a85876f2d515c21f8753ae5352e8c3e5016674943b533d5ccd4"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.936538 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937493 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937618 4725 generic.go:334] "Generic (PLEG): container finished" podID="d1b160f5dda77d281dd8e69ec8d817f9" containerID="6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa" exitCode=0 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937683 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerDied","Data":"6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"a52ddad4327dcdc2d367cffbc3b51dce9d61647332145b3bbb57b4ea703e4b1b"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.937794 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.938834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.938873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.938887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.939871 4725 generic.go:334] "Generic (PLEG): container finished" podID="3dcd261975c3d6b9a6ad6367fd4facd3" containerID="fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436" exitCode=0 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.939936 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerDied","Data":"fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.939956 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"ea1146a2c66bcb07cf4c184268f6d2361acf42c817e3726c7c48adee498b4005"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.940037 4725 kubelet_node_status.go:401] "Setting node annotation to enable 
volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941523 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.941918 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"1f527841984fbc9666ac99d6b4a0424b10d7e6c5966460efca598697439dce53"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.946771 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527" exitCode=0 Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.946833 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.946876 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"be81bafcce2e00ebec316aca6ad7b9c4a298e6f3417b49b2b66df2c166addc97"} Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.947050 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.948212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.948253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.948266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.951168 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.952013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.952046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:33 crc kubenswrapper[4725]: I0120 11:04:33.952058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.260445 4725 controller.go:145] 
"Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s" Jan 20 11:04:34 crc kubenswrapper[4725]: W0120 11:04:34.294364 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.294442 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:34 crc kubenswrapper[4725]: W0120 11:04:34.334809 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.334883 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:34 crc kubenswrapper[4725]: W0120 11:04:34.460635 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.460830 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.491001 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.492388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.492470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.492487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.492543 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.493311 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.194:6443: connect: connection refused" node="crc" Jan 20 11:04:34 crc 
kubenswrapper[4725]: I0120 11:04:34.820448 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 11:04:34 crc kubenswrapper[4725]: E0120 11:04:34.822108 4725 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.194:6443: connect: connection refused" logger="UnhandledError" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.847380 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.194:6443: connect: connection refused Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.855351 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 05:27:30.890299221 +0000 UTC Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.950729 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.950773 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.950784 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.950862 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.951707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.951730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.951738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.959259 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.959896 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.959954 4725 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.962818 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621" exitCode=0 Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.962893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.963019 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.963815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.963835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.963845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.966932 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"d1b160f5dda77d281dd8e69ec8d817f9","Type":"ContainerStarted","Data":"8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.967007 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.968330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.978487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.978544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.984962 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985012 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985027 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"3dcd261975c3d6b9a6ad6367fd4facd3","Type":"ContainerStarted","Data":"9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3"} Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985138 4725 kubelet_node_status.go:401] "Setting 
node annotation to enable volume controller attach/detach" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:34 crc kubenswrapper[4725]: I0120 11:04:34.985985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.855932 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 09:10:01.052319694 +0000 UTC Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.991827 4725 generic.go:334] "Generic (PLEG): container finished" podID="2139d3e2895fc6797b9c76a1b4c9886d" containerID="5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092" exitCode=0 Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.991883 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerDied","Data":"5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092"} Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.992467 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.994148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.994212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.994235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.999467 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.999483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b"} Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.999530 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de"} Jan 20 11:04:35 crc kubenswrapper[4725]: I0120 11:04:35.999470 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.000764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.000816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.000838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:36 crc 
kubenswrapper[4725]: I0120 11:04:36.001814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.002115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.002323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.093543 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.094959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.095121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.095156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.095202 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:36 crc kubenswrapper[4725]: I0120 11:04:36.856977 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-13 19:29:45.778583141 +0000 UTC Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007115 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615"} Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007186 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8"} Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007205 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240"} Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007223 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7"} Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007254 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.007350 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.008098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.008142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.008158 4725 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:37 crc kubenswrapper[4725]: I0120 11:04:37.858443 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 15:10:05.759115011 +0000 UTC Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.018214 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.018216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"2139d3e2895fc6797b9c76a1b4c9886d","Type":"ContainerStarted","Data":"2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c"} Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.018226 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019457 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.019666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.135170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.859645 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 07:06:04.40167717 +0000 UTC Jan 20 11:04:38 crc kubenswrapper[4725]: I0120 11:04:38.992016 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Rotating certificates Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.022298 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.022317 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.024166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.793233 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.793543 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.795175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.795219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.795236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.799385 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:39 crc kubenswrapper[4725]: I0120 11:04:39.859871 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 02:45:48.355899048 +0000 UTC Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.025620 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.025759 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.027186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.027255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.027269 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.066793 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.784992 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.785411 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.787182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.787244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 
11:04:40.787261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:40 crc kubenswrapper[4725]: I0120 11:04:40.860891 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 14:21:08.734732409 +0000 UTC Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.028272 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.029260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.029299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.029309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:41 crc kubenswrapper[4725]: I0120 11:04:41.861837 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 21:51:10.153713618 +0000 UTC Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.030805 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.031920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.031962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.031975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.296224 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-etcd/etcd-crc" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.296499 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.299540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.299583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.299594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.408730 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:42 crc kubenswrapper[4725]: I0120 11:04:42.862498 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-10 02:28:38.476715192 +0000 UTC Jan 20 11:04:42 crc kubenswrapper[4725]: E0120 11:04:42.990040 4725 eviction_manager.go:285] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.034282 4725 
kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.122599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.122732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.122763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.127005 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.870387 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 17:56:50.002605137 +0000 UTC Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.870933 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.871977 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.875256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.875325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:43 crc kubenswrapper[4725]: I0120 11:04:43.875341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.114124 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.115129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.115185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.115194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.240388 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.240618 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.242883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.242910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.242919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:44 crc kubenswrapper[4725]: I0120 11:04:44.870800 4725 
certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 07:14:55.583325404 +0000 UTC Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.409250 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.409350 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://192.168.126.11:10357/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.848581 4725 csi_plugin.go:884] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:45 crc kubenswrapper[4725]: E0120 11:04:45.861903 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="3.2s" Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.871090 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:58:28.467703875 +0000 UTC Jan 20 11:04:45 crc kubenswrapper[4725]: W0120 11:04:45.946298 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:45 crc kubenswrapper[4725]: I0120 11:04:45.946443 4725 trace.go:236] Trace[2044783135]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 11:04:35.944) (total time: 10001ms): Jan 20 11:04:45 crc kubenswrapper[4725]: Trace[2044783135]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:04:45.946) Jan 20 11:04:45 crc kubenswrapper[4725]: Trace[2044783135]: [10.001578506s] [10.001578506s] END Jan 20 11:04:45 crc kubenswrapper[4725]: E0120 11:04:45.946474 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 11:04:46 crc kubenswrapper[4725]: E0120 11:04:46.096566 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": net/http: TLS handshake timeout" node="crc" Jan 20 11:04:46 crc kubenswrapper[4725]: W0120 11:04:46.317779 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to 
list *v1.RuntimeClass: Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:46 crc kubenswrapper[4725]: I0120 11:04:46.317861 4725 trace.go:236] Trace[892702363]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 11:04:36.316) (total time: 10001ms): Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[892702363]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:04:46.317) Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[892702363]: [10.001270546s] [10.001270546s] END Jan 20 11:04:46 crc kubenswrapper[4725]: E0120 11:04:46.317882 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 11:04:46 crc kubenswrapper[4725]: W0120 11:04:46.621027 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:46 crc kubenswrapper[4725]: I0120 11:04:46.621157 4725 trace.go:236] Trace[94559695]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 11:04:36.619) (total time: 10001ms): Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[94559695]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:04:46.621) Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[94559695]: [10.001895965s] [10.001895965s] END Jan 20 11:04:46 crc kubenswrapper[4725]: E0120 11:04:46.621193 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" Jan 20 11:04:46 crc kubenswrapper[4725]: W0120 11:04:46.735203 4725 reflector.go:561] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout Jan 20 11:04:46 crc kubenswrapper[4725]: I0120 11:04:46.735313 4725 trace.go:236] Trace[1818064882]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (20-Jan-2026 11:04:36.733) (total time: 10001ms): Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[1818064882]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (11:04:46.735) Jan 20 11:04:46 crc kubenswrapper[4725]: Trace[1818064882]: [10.001541063s] [10.001541063s] END Jan 20 11:04:46 crc kubenswrapper[4725]: E0120 11:04:46.735340 4725 reflector.go:158] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" 
logger="UnhandledError" Jan 20 11:04:46 crc kubenswrapper[4725]: I0120 11:04:46.871351 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 05:00:50.871769403 +0000 UTC Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.344141 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.344191 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.355539 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.355588 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 20 11:04:47 crc kubenswrapper[4725]: I0120 11:04:47.871802 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 21:24:56.316932802 +0000 UTC Jan 20 11:04:48 crc kubenswrapper[4725]: I0120 11:04:48.872369 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-15 06:57:03.474293561 +0000 UTC Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.297343 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.300016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.300143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.300229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.300314 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:49 crc kubenswrapper[4725]: E0120 11:04:49.306121 4725 kubelet_node_status.go:99] "Unable to register node with API server" err="nodes \"crc\" is forbidden: autoscaling.openshift.io/ManagedNode infra config cache not synchronized" node="crc" Jan 20 11:04:49 crc kubenswrapper[4725]: I0120 11:04:49.873128 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 
05:53:03 +0000 UTC, rotation deadline is 2025-12-11 21:28:00.557087909 +0000 UTC Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.407293 4725 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.791033 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.791839 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.793708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.793738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.793756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.795907 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:50 crc kubenswrapper[4725]: I0120 11:04:50.873803 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 08:54:36.701888586 +0000 UTC Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.134136 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.135121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.135276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.135407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.512216 4725 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 11:04:51 crc kubenswrapper[4725]: I0120 11:04:51.874792 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 17:53:03.958923742 +0000 UTC Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.342861 4725 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.353612 4725 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.448115 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46402->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.448149 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints 
namespace/openshift-kube-apiserver: Liveness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46394->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.448489 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46394->192.168.126.11:17697: read: connection reset by peer" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.448387 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:46402->192.168.126.11:17697: read: connection reset by peer" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.450871 4725 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.450948 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.496289 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.496476 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.499278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.499327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.499351 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.500693 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.584200 4725 csr.go:261] certificate signing request csr-pnmds is approved, waiting to be issued Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.603368 4725 csr.go:257] certificate signing request csr-pnmds is issued Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.609712 4725 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.771351 4725 transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 20 11:04:52 
crc kubenswrapper[4725]: E0120 11:04:52.771981 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Post \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases?timeout=10s\": read tcp 38.102.83.194:35108->38.102.83.194:6443: use of closed network connection" interval="6.4s" Jan 20 11:04:52 crc kubenswrapper[4725]: W0120 11:04:52.771992 4725 reflector.go:484] k8s.io/client-go/informers/factory.go:160: watch of *v1.CSIDriver ended with: very short watch: k8s.io/client-go/informers/factory.go:160: Unexpected watch close - watch lasted less than a second and no items received Jan 20 11:04:52 crc kubenswrapper[4725]: E0120 11:04:52.771975 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-machine-config-operator/events\": read tcp 38.102.83.194:35108->38.102.83.194:6443: use of closed network connection" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188c6b9c7b4912fe openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:d1b160f5dda77d281dd8e69ec8d817f9,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 11:04:33.47721907 +0000 UTC m=+1.685541053,LastTimestamp:2026-01-20 11:04:33.47721907 +0000 UTC m=+1.685541053,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.875815 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-15 14:15:55.897358648 +0000 UTC Jan 20 11:04:52 crc kubenswrapper[4725]: I0120 11:04:52.979057 4725 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.141458 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.144267 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b" exitCode=255 Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.144407 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerDied","Data":"809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b"} Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.264435 4725 scope.go:117] "RemoveContainer" containerID="809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.740018 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Certificate expiration is 2027-01-20 10:59:52 +0000 UTC, rotation deadline is 2026-11-10 05:23:34.8368344 +0000 UTC Jan 20 11:04:53 crc kubenswrapper[4725]: 
I0120 11:04:53.740075 4725 certificate_manager.go:356] kubernetes.io/kube-apiserver-client-kubelet: Waiting 7050h18m41.09676263s for next certificate rotation Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.876570 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 15:02:58.946389832 +0000 UTC Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.876623 4725 apiserver.go:52] "Watching apiserver" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.879802 4725 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66 Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880107 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-network-console/networking-console-plugin-85b44fc459-gdk6g","openshift-network-diagnostics/network-check-source-55646444c4-trplf","openshift-network-diagnostics/network-check-target-xd92c","openshift-network-node-identity/network-node-identity-vrzqb","openshift-network-operator/iptables-alerter-4ln5h","openshift-network-operator/network-operator-58b4c7f79c-55gtf","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc"] Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880439 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880511 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880514 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880540 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880778 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.880954 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:53 crc kubenswrapper[4725]: E0120 11:04:53.880979 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:04:53 crc kubenswrapper[4725]: E0120 11:04:53.880999 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:04:53 crc kubenswrapper[4725]: E0120 11:04:53.880953 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.882642 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.882868 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.882951 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.883039 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.883059 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.883782 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.883957 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.884180 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.890599 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.904964 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.905142 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.919539 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.924444 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd/etcd-crc"] Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.934713 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.941908 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.950538 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.958872 4725 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 20 11:04:53 crc kubenswrapper[4725]: I0120 11:04:53.961321 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040227 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040285 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040309 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040324 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040339 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040382 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.040401 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040442 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040458 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040482 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040497 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040512 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040531 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040549 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040563 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.040585 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") pod \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\" (UID: \"cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040601 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040618 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040670 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040686 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040701 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040718 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040744 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040769 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040801 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: 
\"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040828 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040844 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040860 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040877 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040891 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040910 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040931 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040951 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040967 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040981 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pj782\" (UniqueName: 
\"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.040996 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041010 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041024 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041063 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041096 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041110 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041140 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041155 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041170 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041184 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041214 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041230 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041245 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041260 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041288 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041314 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041330 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041346 4725 reconciler_common.go:159] 
"operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041378 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") pod \"31d8b7a1-420e-4252-a5b7-eebe8a111292\" (UID: \"31d8b7a1-420e-4252-a5b7-eebe8a111292\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041393 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041408 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041423 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041439 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041454 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041468 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041552 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.041569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041585 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041617 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041648 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041663 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041680 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041710 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041724 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041739 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.041754 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041773 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041787 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") pod \"efdd0498-1daa-4136-9a4a-3b948c2293fc\" (UID: \"efdd0498-1daa-4136-9a4a-3b948c2293fc\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041801 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041815 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041830 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041845 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041860 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041877 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041892 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") pod \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\" (UID: \"b6312bbd-5731-4ea0-a20f-81d5a57df44a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 
11:04:54.041908 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041923 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.041938 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") pod \"6509e943-70c6-444c-bc41-48a544e36fbd\" (UID: \"6509e943-70c6-444c-bc41-48a544e36fbd\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.042387 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk" (OuterVolumeSpecName: "kube-api-access-rnphk") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "kube-api-access-rnphk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.042676 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh" (OuterVolumeSpecName: "kube-api-access-x4zgh") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "kube-api-access-x4zgh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.043341 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.043465 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.043570 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf" (OuterVolumeSpecName: "kube-api-access-7c4vf") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "kube-api-access-7c4vf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.043676 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044002 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044018 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca" (OuterVolumeSpecName: "client-ca") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7" (OuterVolumeSpecName: "kube-api-access-9xfj7") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "kube-api-access-9xfj7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044230 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52" (OuterVolumeSpecName: "kube-api-access-s4n52") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "kube-api-access-s4n52". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044481 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044500 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds" (OuterVolumeSpecName: "kube-api-access-w9rds") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "kube-api-access-w9rds". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044654 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044786 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044997 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh" (OuterVolumeSpecName: "kube-api-access-x7zkh") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "kube-api-access-x7zkh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.044995 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images" (OuterVolumeSpecName: "images") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045016 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045200 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr" (OuterVolumeSpecName: "kube-api-access-249nr") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "kube-api-access-249nr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045321 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045354 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045672 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.045692 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.046451 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.046501 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp" (OuterVolumeSpecName: "kube-api-access-qs4fp") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "kube-api-access-qs4fp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.046638 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c" (OuterVolumeSpecName: "kube-api-access-tk88c") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "kube-api-access-tk88c". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.046683 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85" (OuterVolumeSpecName: "kube-api-access-x2m85") pod "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" (UID: "cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d"). InnerVolumeSpecName "kube-api-access-x2m85". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047030 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047287 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047348 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047568 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") pod \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\" (UID: \"210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047592 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047610 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047626 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047643 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047659 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047674 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047690 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047706 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047722 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047739 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047770 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047786 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") pod \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\" (UID: \"b6cd30de-2eeb-49a2-ab40-9167f4560ff5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047801 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047819 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") pod \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\" (UID: \"3cb93b32-e0ae-4377-b9c8-fdb9842c6d59\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047845 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047866 4725 
reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") pod \"5b88f790-22fa-440e-b583-365168c0b23d\" (UID: \"5b88f790-22fa-440e-b583-365168c0b23d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047919 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047959 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047987 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") pod \"6ea678ab-3438-413e-bfe3-290ae7725660\" (UID: \"6ea678ab-3438-413e-bfe3-290ae7725660\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048004 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") pod \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\" (UID: \"25e176fe-21b4-4974-b1ed-c8b94f112a7f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048019 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048035 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") pod \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\" (UID: \"308be0ea-9f5f-4b29-aeb1-5abd31a0b17b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048051 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048066 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.048102 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") pod \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\" (UID: \"1386a44e-36a2-460c-96d0-0359d2b6f0f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048117 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048133 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") pod \"01ab3dd5-8196-46d0-ad33-122e2ca51def\" (UID: \"01ab3dd5-8196-46d0-ad33-122e2ca51def\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048150 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048167 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048186 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048204 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048221 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048238 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") pod \"49ef4625-1d3a-4a9f-b595-c2433d32326d\" (UID: \"49ef4625-1d3a-4a9f-b595-c2433d32326d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048253 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") pod \"496e6271-fb68-4057-954e-a0d97a4afa3f\" (UID: \"496e6271-fb68-4057-954e-a0d97a4afa3f\") " 
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048268 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048283 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") pod \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\" (UID: \"a0128f3a-b052-44ed-a84e-c4c8aaf17c13\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048299 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") pod \"fda69060-fa79-4696-b1a6-7980f124bf7c\" (UID: \"fda69060-fa79-4696-b1a6-7980f124bf7c\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048315 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048339 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") pod \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\" (UID: \"f88749ec-7931-4ee7-b3fc-1ec5e11f92e9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048354 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048375 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") pod \"7539238d-5fe0-46ed-884e-1c3b566537ec\" (UID: \"7539238d-5fe0-46ed-884e-1c3b566537ec\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048392 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048407 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048422 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") pod \"0b78653f-4ff9-4508-8672-245ed9b561e3\" (UID: \"0b78653f-4ff9-4508-8672-245ed9b561e3\") " Jan 20 11:04:54 
crc kubenswrapper[4725]: I0120 11:04:54.048438 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048456 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048471 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048493 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048516 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") pod \"20b0d48f-5fd6-431c-a545-e3c800c7b866\" (UID: \"20b0d48f-5fd6-431c-a545-e3c800c7b866\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048532 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048547 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") pod \"22c825df-677d-4ca6-82db-3454ed06e783\" (UID: \"22c825df-677d-4ca6-82db-3454ed06e783\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048563 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048579 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") pod \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\" (UID: \"bd23aa5c-e532-4e53-bccf-e79f130c5ae8\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048595 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: 
\"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048612 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") pod \"6402fda4-df10-493c-b4e5-d0569419652d\" (UID: \"6402fda4-df10-493c-b4e5-d0569419652d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048628 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048643 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048659 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048674 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048689 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") pod \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\" (UID: \"bc5039c0-ea34-426b-a2b7-fbbc87b49a6d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048705 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048723 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") pod \"925f1c65-6136-48ba-85aa-3a3b50560753\" (UID: \"925f1c65-6136-48ba-85aa-3a3b50560753\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048738 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") pod \"6731426b-95fe-49ff-bb5f-40441049fde2\" (UID: \"6731426b-95fe-49ff-bb5f-40441049fde2\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-279lb\" (UniqueName: 
\"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048771 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") pod \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\" (UID: \"3ab1a177-2de0-46d9-b765-d0d0649bb42e\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048787 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048804 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048821 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") pod \"1d611f23-29be-4491-8495-bee1670e935f\" (UID: \"1d611f23-29be-4491-8495-bee1670e935f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048874 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048892 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") pod \"bf126b07-da06-4140-9a57-dfd54fc6b486\" (UID: \"bf126b07-da06-4140-9a57-dfd54fc6b486\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048909 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") pod \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\" (UID: \"b11524ee-3fca-4b1b-9cdf-6da289fdbc7d\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048925 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048941 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") pod \"9d4552c7-cd75-42dd-8880-30dd377c49a4\" (UID: \"9d4552c7-cd75-42dd-8880-30dd377c49a4\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048961 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048978 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.048994 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049010 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") pod \"a31745f5-9847-4afe-82a5-3161cc66ca93\" (UID: \"a31745f5-9847-4afe-82a5-3161cc66ca93\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049027 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") pod \"7583ce53-e0fe-4a16-9e4d-50516596a136\" (UID: \"7583ce53-e0fe-4a16-9e4d-50516596a136\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049043 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") pod \"5fe579f8-e8a6-4643-bce5-a661393c4dde\" (UID: \"5fe579f8-e8a6-4643-bce5-a661393c4dde\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049059 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") pod \"5225d0e4-402f-4861-b410-819f433b1803\" (UID: \"5225d0e4-402f-4861-b410-819f433b1803\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049106 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.047517 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config" (OuterVolumeSpecName: "config") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") pod \"1bf7eb37-55a3-4c65-b768-a94c82151e69\" (UID: \"1bf7eb37-55a3-4c65-b768-a94c82151e69\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049142 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") pod \"7bb08738-c794-4ee8-9972-3a62ca171029\" (UID: \"7bb08738-c794-4ee8-9972-3a62ca171029\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049160 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") pod \"44663579-783b-4372-86d6-acf235a62d72\" (UID: \"44663579-783b-4372-86d6-acf235a62d72\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049206 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049362 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049435 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn" (OuterVolumeSpecName: "kube-api-access-lz9wn") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "kube-api-access-lz9wn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.049769 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.050869 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "machine-approver-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.050964 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") pod \"4bb40260-dbaa-4fb0-84df-5e680505d512\" (UID: \"4bb40260-dbaa-4fb0-84df-5e680505d512\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.050977 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.050987 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051043 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") pod \"0b574797-001e-440a-8f4e-c0be86edad0f\" (UID: \"0b574797-001e-440a-8f4e-c0be86edad0f\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051114 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") pod \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\" (UID: \"49c341d1-5089-4bc2-86a0-a5e165cfcc6b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051142 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051251 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051278 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") pod \"09efc573-dbb6-4249-bd59-9b87aba8dd28\" (UID: \"09efc573-dbb6-4249-bd59-9b87aba8dd28\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051295 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051303 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") pod \"5441d097-087c-4d9a-baa8-b210afa90fc9\" (UID: \"5441d097-087c-4d9a-baa8-b210afa90fc9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051327 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") pod \"57a731c4-ef35-47a8-b875-bfb08a7f8011\" (UID: \"57a731c4-ef35-47a8-b875-bfb08a7f8011\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051353 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") pod \"87cf06ed-a83f-41a7-828d-70653580a8cb\" (UID: \"87cf06ed-a83f-41a7-828d-70653580a8cb\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051375 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051397 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") pod \"e7e6199b-1264-4501-8953-767f51328d08\" (UID: \"e7e6199b-1264-4501-8953-767f51328d08\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051444 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") pod \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\" (UID: \"8cea82b4-6893-4ddc-af9f-1bb5ae425c5b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051477 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") pod \"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051501 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") pod \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\" (UID: \"96b93a3a-6083-4aea-8eab-fe1aa8245ad9\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051525 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") pod 
\"43509403-f426-496e-be36-56cef71462f5\" (UID: \"43509403-f426-496e-be36-56cef71462f5\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051549 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") pod \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\" (UID: \"09ae3b1a-e8e7-4524-b54b-61eab6f9239a\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051569 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051572 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") pod \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\" (UID: \"c03ee662-fb2f-4fc4-a2c1-af487c19d254\") " Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051642 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051669 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051712 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051731 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: 
\"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051769 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051804 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051839 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051857 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051874 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051910 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: 
\"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051977 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051988 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.051998 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053839 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x4zgh\" (UniqueName: \"kubernetes.io/projected/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-kube-api-access-x4zgh\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053854 4725 reconciler_common.go:293] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fda69060-fa79-4696-b1a6-7980f124bf7c-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053863 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053940 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.053968 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-s4n52\" (UniqueName: \"kubernetes.io/projected/925f1c65-6136-48ba-85aa-3a3b50560753-kube-api-access-s4n52\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054035 4725 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-images\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054063 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tk88c\" (UniqueName: \"kubernetes.io/projected/7539238d-5fe0-46ed-884e-1c3b566537ec-kube-api-access-tk88c\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054100 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x2m85\" (UniqueName: \"kubernetes.io/projected/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d-kube-api-access-x2m85\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054116 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054133 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qs4fp\" (UniqueName: \"kubernetes.io/projected/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-kube-api-access-qs4fp\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054149 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rnphk\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-kube-api-access-rnphk\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054164 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7c4vf\" (UniqueName: \"kubernetes.io/projected/22c825df-677d-4ca6-82db-3454ed06e783-kube-api-access-7c4vf\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054179 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w9rds\" (UniqueName: \"kubernetes.io/projected/20b0d48f-5fd6-431c-a545-e3c800c7b866-kube-api-access-w9rds\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054196 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1386a44e-36a2-460c-96d0-0359d2b6f0f5-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054213 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9xfj7\" (UniqueName: \"kubernetes.io/projected/5225d0e4-402f-4861-b410-819f433b1803-kube-api-access-9xfj7\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054227 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x7zkh\" (UniqueName: \"kubernetes.io/projected/6731426b-95fe-49ff-bb5f-40441049fde2-kube-api-access-x7zkh\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054242 4725 reconciler_common.go:293] "Volume detached for volume \"tmpfs\" (UniqueName: 
\"kubernetes.io/empty-dir/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054256 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054269 4725 reconciler_common.go:293] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054282 4725 reconciler_common.go:293] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054295 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054308 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054321 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/31d8b7a1-420e-4252-a5b7-eebe8a111292-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054334 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-249nr\" (UniqueName: \"kubernetes.io/projected/b6312bbd-5731-4ea0-a20f-81d5a57df44a-kube-api-access-249nr\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054347 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/925f1c65-6136-48ba-85aa-3a3b50560753-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054363 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054379 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054392 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054387 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054406 4725 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/8f668bae-612b-4b75-9490-919e737c6a3b-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054423 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054440 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054671 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.054785 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs" (OuterVolumeSpecName: "kube-api-access-pcxfs") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "kube-api-access-pcxfs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055247 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh" (OuterVolumeSpecName: "kube-api-access-xcgwh") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "kube-api-access-xcgwh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055309 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055345 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca" (OuterVolumeSpecName: "service-ca") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055454 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055621 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8" (OuterVolumeSpecName: "kube-api-access-6ccd8") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "kube-api-access-6ccd8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055691 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config" (OuterVolumeSpecName: "config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055787 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055836 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.055861 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.056034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl" (OuterVolumeSpecName: "kube-api-access-xcphl") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "kube-api-access-xcphl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.057353 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx" (OuterVolumeSpecName: "kube-api-access-d6qdx") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "kube-api-access-d6qdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.058640 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config" (OuterVolumeSpecName: "config") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.058729 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca" (OuterVolumeSpecName: "serviceca") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.058850 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059045 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7" (OuterVolumeSpecName: "kube-api-access-sb6h7") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "kube-api-access-sb6h7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059305 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059345 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059401 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config" (OuterVolumeSpecName: "config") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059761 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059897 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.059932 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060055 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt" (OuterVolumeSpecName: "kube-api-access-fqsjt") pod "efdd0498-1daa-4136-9a4a-3b948c2293fc" (UID: "efdd0498-1daa-4136-9a4a-3b948c2293fc"). InnerVolumeSpecName "kube-api-access-fqsjt". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060172 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782" (OuterVolumeSpecName: "kube-api-access-pj782") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "kube-api-access-pj782". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060171 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key" (OuterVolumeSpecName: "signing-key") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060199 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060458 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060759 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.060960 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061103 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061347 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061556 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7" (OuterVolumeSpecName: "kube-api-access-nzwt7") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "kube-api-access-nzwt7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061675 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.061783 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5" (OuterVolumeSpecName: "kube-api-access-qg5z5") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "kube-api-access-qg5z5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.062145 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5" (OuterVolumeSpecName: "kube-api-access-zgdk5") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "kube-api-access-zgdk5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.062167 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "31d8b7a1-420e-4252-a5b7-eebe8a111292" (UID: "31d8b7a1-420e-4252-a5b7-eebe8a111292"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.062848 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.063206 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config" (OuterVolumeSpecName: "config") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.063217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca" (OuterVolumeSpecName: "service-ca") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "service-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.063815 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.064036 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config" (OuterVolumeSpecName: "config") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.064106 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn" (OuterVolumeSpecName: "kube-api-access-jkwtn") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "kube-api-access-jkwtn". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.064222 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.564194229 +0000 UTC m=+22.772516192 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.064568 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.064737 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz" (OuterVolumeSpecName: "kube-api-access-2d4wz") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "kube-api-access-2d4wz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065224 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "kube-api-access". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065326 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065555 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065576 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv" (OuterVolumeSpecName: "kube-api-access-zkvpv") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "kube-api-access-zkvpv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.065856 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88" (OuterVolumeSpecName: "kube-api-access-lzf88") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "kube-api-access-lzf88". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.066289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config" (OuterVolumeSpecName: "config") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.066736 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.072656 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.072984 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs" (OuterVolumeSpecName: "certs") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077282 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "0b78653f-4ff9-4508-8672-245ed9b561e3" (UID: "0b78653f-4ff9-4508-8672-245ed9b561e3"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077508 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg" (OuterVolumeSpecName: "kube-api-access-dbsvg") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "kube-api-access-dbsvg". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077627 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "b6312bbd-5731-4ea0-a20f-81d5a57df44a" (UID: "b6312bbd-5731-4ea0-a20f-81d5a57df44a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077706 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077790 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.077887 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8" (OuterVolumeSpecName: "kube-api-access-wxkg8") pod "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" (UID: "3cb93b32-e0ae-4377-b9c8-fdb9842c6d59"). InnerVolumeSpecName "kube-api-access-wxkg8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078058 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b6cd30de-2eeb-49a2-ab40-9167f4560ff5" (UID: "b6cd30de-2eeb-49a2-ab40-9167f4560ff5"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078275 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). 
InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078284 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities" (OuterVolumeSpecName: "utilities") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078416 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078473 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "5b88f790-22fa-440e-b583-365168c0b23d" (UID: "5b88f790-22fa-440e-b583-365168c0b23d"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078571 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities" (OuterVolumeSpecName: "utilities") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.078707 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc" (OuterVolumeSpecName: "kube-api-access-vt5rc") pod "44663579-783b-4372-86d6-acf235a62d72" (UID: "44663579-783b-4372-86d6-acf235a62d72"). InnerVolumeSpecName "kube-api-access-vt5rc". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.079338 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.079532 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "encryption-config". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.079722 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf" (OuterVolumeSpecName: "kube-api-access-v47cf") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "kube-api-access-v47cf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.079928 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert" (OuterVolumeSpecName: "cert") pod "20b0d48f-5fd6-431c-a545-e3c800c7b866" (UID: "20b0d48f-5fd6-431c-a545-e3c800c7b866"). InnerVolumeSpecName "cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.080126 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.080460 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "22c825df-677d-4ca6-82db-3454ed06e783" (UID: "22c825df-677d-4ca6-82db-3454ed06e783"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.083536 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.084001 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh" (OuterVolumeSpecName: "kube-api-access-2w9zh") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "kube-api-access-2w9zh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.097196 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.097504 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config" (OuterVolumeSpecName: "config") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.097976 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.098241 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2" (OuterVolumeSpecName: "kube-api-access-jhbk2") pod "bd23aa5c-e532-4e53-bccf-e79f130c5ae8" (UID: "bd23aa5c-e532-4e53-bccf-e79f130c5ae8"). InnerVolumeSpecName "kube-api-access-jhbk2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.098419 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd" (OuterVolumeSpecName: "kube-api-access-mnrrd") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "kube-api-access-mnrrd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.098582 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.098752 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "a31745f5-9847-4afe-82a5-3161cc66ca93" (UID: "a31745f5-9847-4afe-82a5-3161cc66ca93"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.099217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume" (OuterVolumeSpecName: "config-volume") pod "87cf06ed-a83f-41a7-828d-70653580a8cb" (UID: "87cf06ed-a83f-41a7-828d-70653580a8cb"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.099535 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7" (OuterVolumeSpecName: "kube-api-access-kfwg7") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "kube-api-access-kfwg7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.107158 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp" (OuterVolumeSpecName: "kube-api-access-fcqwp") pod "5fe579f8-e8a6-4643-bce5-a661393c4dde" (UID: "5fe579f8-e8a6-4643-bce5-a661393c4dde"). InnerVolumeSpecName "kube-api-access-fcqwp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.107674 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" (UID: "bc5039c0-ea34-426b-a2b7-fbbc87b49a6d"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.108190 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.111147 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz" (OuterVolumeSpecName: "kube-api-access-8tdtz") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "kube-api-access-8tdtz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.111372 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "6731426b-95fe-49ff-bb5f-40441049fde2" (UID: "6731426b-95fe-49ff-bb5f-40441049fde2"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.117184 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159276 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159373 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config" (OuterVolumeSpecName: "config") pod "e7e6199b-1264-4501-8953-767f51328d08" (UID: "e7e6199b-1264-4501-8953-767f51328d08"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159444 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv" (OuterVolumeSpecName: "kube-api-access-d4lsv") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "kube-api-access-d4lsv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159675 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159681 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities" (OuterVolumeSpecName: "utilities") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159868 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "4bb40260-dbaa-4fb0-84df-5e680505d512" (UID: "4bb40260-dbaa-4fb0-84df-5e680505d512"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.159889 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "stats-auth". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.160070 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.160580 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.160985 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.161131 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "925f1c65-6136-48ba-85aa-3a3b50560753" (UID: "925f1c65-6136-48ba-85aa-3a3b50560753"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.161583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "09ae3b1a-e8e7-4524-b54b-61eab6f9239a" (UID: "09ae3b1a-e8e7-4524-b54b-61eab6f9239a"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.161796 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.160458 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz" (OuterVolumeSpecName: "kube-api-access-bf2bz") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "kube-api-access-bf2bz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162118 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config" (OuterVolumeSpecName: "config") pod "5441d097-087c-4d9a-baa8-b210afa90fc9" (UID: "5441d097-087c-4d9a-baa8-b210afa90fc9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162208 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162235 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities" (OuterVolumeSpecName: "utilities") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162350 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config" (OuterVolumeSpecName: "config") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162113 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb" (OuterVolumeSpecName: "kube-api-access-279lb") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "kube-api-access-279lb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162435 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "package-server-manager-serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.162876 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4" (OuterVolumeSpecName: "kube-api-access-w4xd4") pod "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" (UID: "8cea82b4-6893-4ddc-af9f-1bb5ae425c5b"). InnerVolumeSpecName "kube-api-access-w4xd4". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.163587 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images" (OuterVolumeSpecName: "images") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.163950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config" (OuterVolumeSpecName: "console-config") pod "43509403-f426-496e-be36-56cef71462f5" (UID: "43509403-f426-496e-be36-56cef71462f5"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.167914 4725 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.176281 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-c9dck"] Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.176693 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.177424 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "bf126b07-da06-4140-9a57-dfd54fc6b486" (UID: "bf126b07-da06-4140-9a57-dfd54fc6b486"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.178692 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9d4552c7-cd75-42dd-8880-30dd377c49a4" (UID: "9d4552c7-cd75-42dd-8880-30dd377c49a4"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.179405 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.180398 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" (UID: "b11524ee-3fca-4b1b-9cdf-6da289fdbc7d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.181541 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz" (OuterVolumeSpecName: "kube-api-access-6g6sz") pod "6509e943-70c6-444c-bc41-48a544e36fbd" (UID: "6509e943-70c6-444c-bc41-48a544e36fbd"). InnerVolumeSpecName "kube-api-access-6g6sz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.181931 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6" (OuterVolumeSpecName: "kube-api-access-htfz6") pod "6ea678ab-3438-413e-bfe3-290ae7725660" (UID: "6ea678ab-3438-413e-bfe3-290ae7725660"). InnerVolumeSpecName "kube-api-access-htfz6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182257 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e7e6199b-1264-4501-8953-767f51328d08-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182275 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lz9wn\" (UniqueName: \"kubernetes.io/projected/a31745f5-9847-4afe-82a5-3161cc66ca93-kube-api-access-lz9wn\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182285 4725 reconciler_common.go:293] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/efdd0498-1daa-4136-9a4a-3b948c2293fc-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182296 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcphl\" (UniqueName: \"kubernetes.io/projected/7583ce53-e0fe-4a16-9e4d-50516596a136-kube-api-access-xcphl\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182308 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dbsvg\" (UniqueName: \"kubernetes.io/projected/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-kube-api-access-dbsvg\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182320 4725 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182332 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6ea678ab-3438-413e-bfe3-290ae7725660-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182343 4725 reconciler_common.go:293] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-images\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182352 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182356 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7539238d-5fe0-46ed-884e-1c3b566537ec-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182668 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6509e943-70c6-444c-bc41-48a544e36fbd-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182688 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6g6sz\" (UniqueName: \"kubernetes.io/projected/6509e943-70c6-444c-bc41-48a544e36fbd-kube-api-access-6g6sz\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182701 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/e7e6199b-1264-4501-8953-767f51328d08-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182714 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d4lsv\" (UniqueName: \"kubernetes.io/projected/25e176fe-21b4-4974-b1ed-c8b94f112a7f-kube-api-access-d4lsv\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182726 4725 reconciler_common.go:293] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182748 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01ab3dd5-8196-46d0-ad33-122e2ca51def-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182766 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182779 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wxkg8\" (UniqueName: \"kubernetes.io/projected/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-kube-api-access-wxkg8\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182801 4725 reconciler_common.go:293] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-console-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182818 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182832 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182845 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/5b88f790-22fa-440e-b583-365168c0b23d-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 
11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.182959 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.183358 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit" (OuterVolumeSpecName: "audit") pod "1bf7eb37-55a3-4c65-b768-a94c82151e69" (UID: "1bf7eb37-55a3-4c65-b768-a94c82151e69"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.183871 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config" (OuterVolumeSpecName: "config") pod "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" (UID: "210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.183912 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config" (OuterVolumeSpecName: "config") pod "1386a44e-36a2-460c-96d0-0359d2b6f0f5" (UID: "1386a44e-36a2-460c-96d0-0359d2b6f0f5"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184178 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "metrics-certs". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184346 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184361 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184372 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9d4552c7-cd75-42dd-8880-30dd377c49a4-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184382 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184392 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-htfz6\" (UniqueName: \"kubernetes.io/projected/6ea678ab-3438-413e-bfe3-290ae7725660-kube-api-access-htfz6\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184402 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/0b78653f-4ff9-4508-8672-245ed9b561e3-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184411 4725 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4bb40260-dbaa-4fb0-84df-5e680505d512-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184475 4725 reconciler_common.go:293] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/20b0d48f-5fd6-431c-a545-e3c800c7b866-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184501 4725 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184510 4725 reconciler_common.go:293] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184519 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v47cf\" (UniqueName: \"kubernetes.io/projected/c03ee662-fb2f-4fc4-a2c1-af487c19d254-kube-api-access-v47cf\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184528 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mnrrd\" (UniqueName: \"kubernetes.io/projected/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-kube-api-access-mnrrd\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184538 4725 reconciler_common.go:293] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: 
\"kubernetes.io/secret/6402fda4-df10-493c-b4e5-d0569419652d-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184547 4725 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184556 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184566 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jhbk2\" (UniqueName: \"kubernetes.io/projected/bd23aa5c-e532-4e53-bccf-e79f130c5ae8-kube-api-access-jhbk2\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184575 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a31745f5-9847-4afe-82a5-3161cc66ca93-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184583 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/87cf06ed-a83f-41a7-828d-70653580a8cb-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184592 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-kfwg7\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-kube-api-access-kfwg7\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184602 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fcqwp\" (UniqueName: \"kubernetes.io/projected/5fe579f8-e8a6-4643-bce5-a661393c4dde-kube-api-access-fcqwp\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184611 4725 reconciler_common.go:293] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184620 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184629 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184638 4725 reconciler_common.go:293] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/6731426b-95fe-49ff-bb5f-40441049fde2-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184647 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8tdtz\" (UniqueName: \"kubernetes.io/projected/09efc573-dbb6-4249-bd59-9b87aba8dd28-kube-api-access-8tdtz\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184657 4725 
reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184666 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lzf88\" (UniqueName: \"kubernetes.io/projected/0b574797-001e-440a-8f4e-c0be86edad0f-kube-api-access-lzf88\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184675 4725 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184685 4725 reconciler_common.go:293] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184696 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zkvpv\" (UniqueName: \"kubernetes.io/projected/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-kube-api-access-zkvpv\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184705 4725 reconciler_common.go:293] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/5fe579f8-e8a6-4643-bce5-a661393c4dde-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184714 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184723 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/a31745f5-9847-4afe-82a5-3161cc66ca93-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184732 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184741 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184750 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vt5rc\" (UniqueName: \"kubernetes.io/projected/44663579-783b-4372-86d6-acf235a62d72-kube-api-access-vt5rc\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184759 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184768 4725 reconciler_common.go:293] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184777 4725 reconciler_common.go:293] 
"Volume detached for volume \"kube-api-access-2w9zh\" (UniqueName: \"kubernetes.io/projected/4bb40260-dbaa-4fb0-84df-5e680505d512-kube-api-access-2w9zh\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184785 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/0b574797-001e-440a-8f4e-c0be86edad0f-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184795 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184811 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184822 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2d4wz\" (UniqueName: \"kubernetes.io/projected/5441d097-087c-4d9a-baa8-b210afa90fc9-kube-api-access-2d4wz\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184831 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d6qdx\" (UniqueName: \"kubernetes.io/projected/87cf06ed-a83f-41a7-828d-70653580a8cb-kube-api-access-d6qdx\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184841 4725 reconciler_common.go:293] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184852 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184863 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e7e6199b-1264-4501-8953-767f51328d08-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184871 4725 reconciler_common.go:293] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/43509403-f426-496e-be36-56cef71462f5-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184880 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184888 4725 reconciler_common.go:293] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184897 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184910 4725 reconciler_common.go:293] "Volume detached for volume 
\"kube-api-access-nzwt7\" (UniqueName: \"kubernetes.io/projected/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-kube-api-access-nzwt7\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184919 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qg5z5\" (UniqueName: \"kubernetes.io/projected/43509403-f426-496e-be36-56cef71462f5-kube-api-access-qg5z5\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184927 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184946 4725 reconciler_common.go:293] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/31d8b7a1-420e-4252-a5b7-eebe8a111292-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184954 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6402fda4-df10-493c-b4e5-d0569419652d-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184963 4725 reconciler_common.go:293] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184972 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/0b78653f-4ff9-4508-8672-245ed9b561e3-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184981 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/496e6271-fb68-4057-954e-a0d97a4afa3f-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184989 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5441d097-087c-4d9a-baa8-b210afa90fc9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.184998 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186205 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config" (OuterVolumeSpecName: "config") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186452 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config" (OuterVolumeSpecName: "config") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186479 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct" (OuterVolumeSpecName: "kube-api-access-cfbct") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "kube-api-access-cfbct". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186706 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j" (OuterVolumeSpecName: "kube-api-access-w7l8j") pod "01ab3dd5-8196-46d0-ad33-122e2ca51def" (UID: "01ab3dd5-8196-46d0-ad33-122e2ca51def"). InnerVolumeSpecName "kube-api-access-w7l8j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.186951 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb" (OuterVolumeSpecName: "kube-api-access-mg5zb") pod "6402fda4-df10-493c-b4e5-d0569419652d" (UID: "6402fda4-df10-493c-b4e5-d0569419652d"). InnerVolumeSpecName "kube-api-access-mg5zb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.185008 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/22c825df-677d-4ca6-82db-3454ed06e783-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187138 4725 reconciler_common.go:293] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/22c825df-677d-4ca6-82db-3454ed06e783-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187148 4725 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187157 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6ea678ab-3438-413e-bfe3-290ae7725660-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187167 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7583ce53-e0fe-4a16-9e4d-50516596a136-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187176 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187188 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xcgwh\" (UniqueName: \"kubernetes.io/projected/fda69060-fa79-4696-b1a6-7980f124bf7c-kube-api-access-xcgwh\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187198 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187208 4725 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/b6312bbd-5731-4ea0-a20f-81d5a57df44a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187217 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6509e943-70c6-444c-bc41-48a544e36fbd-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187227 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pcxfs\" (UniqueName: \"kubernetes.io/projected/9d4552c7-cd75-42dd-8880-30dd377c49a4-kube-api-access-pcxfs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187236 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-6ccd8\" (UniqueName: \"kubernetes.io/projected/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-kube-api-access-6ccd8\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187245 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187254 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/43509403-f426-496e-be36-56cef71462f5-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187262 4725 reconciler_common.go:293] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-key\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187277 4725 reconciler_common.go:293] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/bf126b07-da06-4140-9a57-dfd54fc6b486-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187286 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pj782\" (UniqueName: \"kubernetes.io/projected/b6cd30de-2eeb-49a2-ab40-9167f4560ff5-kube-api-access-pj782\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187295 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187303 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187312 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/87cf06ed-a83f-41a7-828d-70653580a8cb-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187323 4725 reconciler_common.go:293] 
"Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/8f668bae-612b-4b75-9490-919e737c6a3b-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187332 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/1bf7eb37-55a3-4c65-b768-a94c82151e69-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187340 4725 reconciler_common.go:293] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187415 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187473 4725 reconciler_common.go:293] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187486 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/496e6271-fb68-4057-954e-a0d97a4afa3f-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187501 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-jkwtn\" (UniqueName: \"kubernetes.io/projected/5b88f790-22fa-440e-b583-365168c0b23d-kube-api-access-jkwtn\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187515 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zgdk5\" (UniqueName: \"kubernetes.io/projected/31d8b7a1-420e-4252-a5b7-eebe8a111292-kube-api-access-zgdk5\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187527 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqsjt\" (UniqueName: \"kubernetes.io/projected/efdd0498-1daa-4136-9a4a-3b948c2293fc-kube-api-access-fqsjt\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187547 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01ab3dd5-8196-46d0-ad33-122e2ca51def-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187567 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sb6h7\" (UniqueName: \"kubernetes.io/projected/1bf7eb37-55a3-4c65-b768-a94c82151e69-kube-api-access-sb6h7\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187580 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/bf126b07-da06-4140-9a57-dfd54fc6b486-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187594 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/0b78653f-4ff9-4508-8672-245ed9b561e3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187611 4725 reconciler_common.go:293] "Volume detached for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59-serviceca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.187810 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.177450 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "96b93a3a-6083-4aea-8eab-fe1aa8245ad9" (UID: "96b93a3a-6083-4aea-8eab-fe1aa8245ad9"). InnerVolumeSpecName "metrics-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.245744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-ovnkube-identity-cm\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.246624 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-iptables-alerter-script\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.246650 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.246633 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" (UID: "308be0ea-9f5f-4b29-aeb1-5abd31a0b17b"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.247585 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp" (OuterVolumeSpecName: "kube-api-access-ngvvp") pod "49c341d1-5089-4bc2-86a0-a5e165cfcc6b" (UID: "49c341d1-5089-4bc2-86a0-a5e165cfcc6b"). InnerVolumeSpecName "kube-api-access-ngvvp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.247665 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "0b574797-001e-440a-8f4e-c0be86edad0f" (UID: "0b574797-001e-440a-8f4e-c0be86edad0f"). InnerVolumeSpecName "mcc-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.247881 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "25e176fe-21b4-4974-b1ed-c8b94f112a7f" (UID: "25e176fe-21b4-4974-b1ed-c8b94f112a7f"). InnerVolumeSpecName "signing-cabundle". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.248406 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.248415 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca" (OuterVolumeSpecName: "client-ca") pod "7583ce53-e0fe-4a16-9e4d-50516596a136" (UID: "7583ce53-e0fe-4a16-9e4d-50516596a136"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.248463 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.748444813 +0000 UTC m=+22.956766866 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.248624 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7539238d-5fe0-46ed-884e-1c3b566537ec" (UID: "7539238d-5fe0-46ed-884e-1c3b566537ec"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.248804 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "09efc573-dbb6-4249-bd59-9b87aba8dd28" (UID: "09efc573-dbb6-4249-bd59-9b87aba8dd28"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.248925 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "fda69060-fa79-4696-b1a6-7980f124bf7c" (UID: "fda69060-fa79-4696-b1a6-7980f124bf7c"). InnerVolumeSpecName "proxy-tls". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.249337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/ef543e1b-8068-4ea3-b32a-61027b32e95d-env-overrides\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.249309 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.249617 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.749482414 +0000 UTC m=+22.957804387 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.250037 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" (UID: "f88749ec-7931-4ee7-b3fc-1ec5e11f92e9"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.250310 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ef543e1b-8068-4ea3-b32a-61027b32e95d-webhook-cert\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.250477 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v" (OuterVolumeSpecName: "kube-api-access-pjr6v") pod "49ef4625-1d3a-4a9f-b595-c2433d32326d" (UID: "49ef4625-1d3a-4a9f-b595-c2433d32326d"). InnerVolumeSpecName "kube-api-access-pjr6v". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.251222 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.252297 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rdwmf\" (UniqueName: \"kubernetes.io/projected/37a5e44f-9a88-4405-be8a-b645485e7312-kube-api-access-rdwmf\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.252749 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.252850 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m" (OuterVolumeSpecName: "kube-api-access-gf66m") pod "a0128f3a-b052-44ed-a84e-c4c8aaf17c13" (UID: "a0128f3a-b052-44ed-a84e-c4c8aaf17c13"). InnerVolumeSpecName "kube-api-access-gf66m". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.252830 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"f4b27818a5e8e43d0dc095d08835c792","Type":"ContainerStarted","Data":"3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7"} Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253114 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj" (OuterVolumeSpecName: "kube-api-access-4d4hj") pod "3ab1a177-2de0-46d9-b765-d0d0649bb42e" (UID: "3ab1a177-2de0-46d9-b765-d0d0649bb42e"). InnerVolumeSpecName "kube-api-access-4d4hj". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253485 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "c03ee662-fb2f-4fc4-a2c1-af487c19d254" (UID: "c03ee662-fb2f-4fc4-a2c1-af487c19d254"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253495 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "7bb08738-c794-4ee8-9972-3a62ca171029" (UID: "7bb08738-c794-4ee8-9972-3a62ca171029"). InnerVolumeSpecName "cni-binary-copy". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253568 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "496e6271-fb68-4057-954e-a0d97a4afa3f" (UID: "496e6271-fb68-4057-954e-a0d97a4afa3f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.253796 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.260995 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.266732 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/37a5e44f-9a88-4405-be8a-b645485e7312-metrics-tls\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.268853 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "57a731c4-ef35-47a8-b875-bfb08a7f8011" (UID: "57a731c4-ef35-47a8-b875-bfb08a7f8011"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.269224 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.270757 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1d611f23-29be-4491-8495-bee1670e935f" (UID: "1d611f23-29be-4491-8495-bee1670e935f"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.277072 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.277855 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291697 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291729 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-hosts-file\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291754 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szb2t\" (UniqueName: \"kubernetes.io/projected/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-kube-api-access-szb2t\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291798 4725 reconciler_common.go:293] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291812 4725 reconciler_common.go:293] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/25e176fe-21b4-4974-b1ed-c8b94f112a7f-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291825 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1386a44e-36a2-460c-96d0-0359d2b6f0f5-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291839 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mg5zb\" (UniqueName: \"kubernetes.io/projected/6402fda4-df10-493c-b4e5-d0569419652d-kube-api-access-mg5zb\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291852 4725 reconciler_common.go:293] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/7bb08738-c794-4ee8-9972-3a62ca171029-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291864 4725 reconciler_common.go:293] "Volume detached for volume \"audit\" (UniqueName: 
\"kubernetes.io/configmap/1bf7eb37-55a3-4c65-b768-a94c82151e69-audit\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291875 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1386a44e-36a2-460c-96d0-0359d2b6f0f5-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291885 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291896 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w7l8j\" (UniqueName: \"kubernetes.io/projected/01ab3dd5-8196-46d0-ad33-122e2ca51def-kube-api-access-w7l8j\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291907 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjr6v\" (UniqueName: \"kubernetes.io/projected/49ef4625-1d3a-4a9f-b595-c2433d32326d-kube-api-access-pjr6v\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291918 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/496e6271-fb68-4057-954e-a0d97a4afa3f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291928 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ngvvp\" (UniqueName: \"kubernetes.io/projected/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-kube-api-access-ngvvp\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291940 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4d4hj\" (UniqueName: \"kubernetes.io/projected/3ab1a177-2de0-46d9-b765-d0d0649bb42e-kube-api-access-4d4hj\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291951 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1d611f23-29be-4491-8495-bee1670e935f-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291962 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w4xd4\" (UniqueName: \"kubernetes.io/projected/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b-kube-api-access-w4xd4\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291983 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-gf66m\" (UniqueName: \"kubernetes.io/projected/a0128f3a-b052-44ed-a84e-c4c8aaf17c13-kube-api-access-gf66m\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.291995 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/fda69060-fa79-4696-b1a6-7980f124bf7c-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292005 4725 reconciler_common.go:293] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c03ee662-fb2f-4fc4-a2c1-af487c19d254-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292016 4725 reconciler_common.go:293] "Volume detached for volume \"srv-cert\" (UniqueName: 
\"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292028 4725 reconciler_common.go:293] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/0b574797-001e-440a-8f4e-c0be86edad0f-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292040 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7539238d-5fe0-46ed-884e-1c3b566537ec-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292050 4725 reconciler_common.go:293] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/09efc573-dbb6-4249-bd59-9b87aba8dd28-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292060 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8f668bae-612b-4b75-9490-919e737c6a3b-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292071 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292110 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-279lb\" (UniqueName: \"kubernetes.io/projected/7bb08738-c794-4ee8-9972-3a62ca171029-kube-api-access-279lb\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292122 4725 reconciler_common.go:293] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3ab1a177-2de0-46d9-b765-d0d0649bb42e-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292161 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/bf126b07-da06-4140-9a57-dfd54fc6b486-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292175 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292185 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/7583ce53-e0fe-4a16-9e4d-50516596a136-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292195 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292206 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9d4552c7-cd75-42dd-8880-30dd377c49a4-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292217 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5441d097-087c-4d9a-baa8-b210afa90fc9-config\") on node 
\"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292228 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/57a731c4-ef35-47a8-b875-bfb08a7f8011-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292238 4725 reconciler_common.go:293] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/8f668bae-612b-4b75-9490-919e737c6a3b-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292248 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/96b93a3a-6083-4aea-8eab-fe1aa8245ad9-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292259 4725 reconciler_common.go:293] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/c03ee662-fb2f-4fc4-a2c1-af487c19d254-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292269 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09efc573-dbb6-4249-bd59-9b87aba8dd28-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292279 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292290 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cfbct\" (UniqueName: \"kubernetes.io/projected/57a731c4-ef35-47a8-b875-bfb08a7f8011-kube-api-access-cfbct\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292303 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bf2bz\" (UniqueName: \"kubernetes.io/projected/1d611f23-29be-4491-8495-bee1670e935f-kube-api-access-bf2bz\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292316 4725 reconciler_common.go:293] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292326 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/09ae3b1a-e8e7-4524-b54b-61eab6f9239a-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292337 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/925f1c65-6136-48ba-85aa-3a3b50560753-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292348 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/49c341d1-5089-4bc2-86a0-a5e165cfcc6b-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292427 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-host-slash\") pod \"iptables-alerter-4ln5h\" 
(UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.292492 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/37a5e44f-9a88-4405-be8a-b645485e7312-host-etc-kube\") pod \"network-operator-58b4c7f79c-55gtf\" (UID: \"37a5e44f-9a88-4405-be8a-b645485e7312\") " pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.305361 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5225d0e4-402f-4861-b410-819f433b1803" (UID: "5225d0e4-402f-4861-b410-819f433b1803"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.316912 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.316946 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.316963 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317028 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.817006915 +0000 UTC m=+23.025328888 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317393 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317424 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317438 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.317493 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:54.817474709 +0000 UTC m=+23.025796682 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.320818 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20
T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.335223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rczfb\" (UniqueName: \"kubernetes.io/projected/d75a4c96-2883-4a0b-bab2-0fab2b6c0b49-kube-api-access-rczfb\") pod \"iptables-alerter-4ln5h\" (UID: \"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\") " pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.337794 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2kz5\" (UniqueName: \"kubernetes.io/projected/ef543e1b-8068-4ea3-b32a-61027b32e95d-kube-api-access-s2kz5\") pod \"network-node-identity-vrzqb\" (UID: \"ef543e1b-8068-4ea3-b32a-61027b32e95d\") " pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.393718 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-hosts-file\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " 
pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.393783 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-szb2t\" (UniqueName: \"kubernetes.io/projected/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-kube-api-access-szb2t\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.393879 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5225d0e4-402f-4861-b410-819f433b1803-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.394613 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-hosts-file\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.505716 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-vrzqb" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.545429 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-4ln5h" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.546242 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" Jan 20 11:04:54 crc kubenswrapper[4725]: W0120 11:04:54.623500 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a5e44f_9a88_4405_be8a_b645485e7312.slice/crio-e7af74551e713f97d5ad3d4402f56eb182a24f58af986f27c8d7d37acdec47d4 WatchSource:0}: Error finding container e7af74551e713f97d5ad3d4402f56eb182a24f58af986f27c8d7d37acdec47d4: Status 404 returned error can't find the container with id e7af74551e713f97d5ad3d4402f56eb182a24f58af986f27c8d7d37acdec47d4 Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.626286 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.626877 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.627118 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.627099794 +0000 UTC m=+23.835421767 (durationBeforeRetry 1s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.651447 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.654680 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-szb2t\" (UniqueName: \"kubernetes.io/projected/a3acff9b-8c0b-4a8a-b81f-449be15f3aef-kube-api-access-szb2t\") pod \"node-resolver-c9dck\" (UID: \"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\") " pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.701265 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-de
v@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a
67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.716096 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers 
with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.729162 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829444 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829522 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.829505231 +0000 UTC m=+24.037827204 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.829445 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.829563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.829643 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.829670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829734 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829765 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.829756318 +0000 UTC m=+24.038078291 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829731 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829799 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829816 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829831 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829846 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829855 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829854 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.829840281 +0000 UTC m=+24.038162254 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: E0120 11:04:54.829882 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:55.829873092 +0000 UTC m=+24.038195065 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.847177 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.856141 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc 
kubenswrapper[4725]: I0120 11:04:54.871160 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"runnin
g\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.873057 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/node-resolver-c9dck" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.877009 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 23:21:20.966895562 +0000 UTC Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.881633 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-daemon-z2gv8"] Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.882017 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.883920 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.884264 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.884299 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.885546 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.885615 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.886156 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.896525 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.912096 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.927984 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cer
t-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.930235 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a4c10a0-687d-4b24-b1a9-5aba619c0668-proxy-tls\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.930271 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-47wsh\" (UniqueName: \"kubernetes.io/projected/6a4c10a0-687d-4b24-b1a9-5aba619c0668-kube-api-access-47wsh\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.930294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a4c10a0-687d-4b24-b1a9-5aba619c0668-mcd-auth-proxy-config\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") 
" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.930498 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a4c10a0-687d-4b24-b1a9-5aba619c0668-rootfs\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.941675 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01ab3dd5-8196-46d0-ad33-122e2ca51def" path="/var/lib/kubelet/pods/01ab3dd5-8196-46d0-ad33-122e2ca51def/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.942566 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09ae3b1a-e8e7-4524-b54b-61eab6f9239a" path="/var/lib/kubelet/pods/09ae3b1a-e8e7-4524-b54b-61eab6f9239a/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.944869 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.946340 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09efc573-dbb6-4249-bd59-9b87aba8dd28" path="/var/lib/kubelet/pods/09efc573-dbb6-4249-bd59-9b87aba8dd28/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.947488 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b574797-001e-440a-8f4e-c0be86edad0f" path="/var/lib/kubelet/pods/0b574797-001e-440a-8f4e-c0be86edad0f/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.948799 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0b78653f-4ff9-4508-8672-245ed9b561e3" path="/var/lib/kubelet/pods/0b78653f-4ff9-4508-8672-245ed9b561e3/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.949480 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1386a44e-36a2-460c-96d0-0359d2b6f0f5" path="/var/lib/kubelet/pods/1386a44e-36a2-460c-96d0-0359d2b6f0f5/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.950146 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1bf7eb37-55a3-4c65-b768-a94c82151e69" path="/var/lib/kubelet/pods/1bf7eb37-55a3-4c65-b768-a94c82151e69/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.951377 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1d611f23-29be-4491-8495-bee1670e935f" 
path="/var/lib/kubelet/pods/1d611f23-29be-4491-8495-bee1670e935f/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.953183 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20b0d48f-5fd6-431c-a545-e3c800c7b866" path="/var/lib/kubelet/pods/20b0d48f-5fd6-431c-a545-e3c800c7b866/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.954465 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c" path="/var/lib/kubelet/pods/210d8245-ebfc-4e3b-ac4a-e21ce76f9a7c/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.955093 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="22c825df-677d-4ca6-82db-3454ed06e783" path="/var/lib/kubelet/pods/22c825df-677d-4ca6-82db-3454ed06e783/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.956593 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="25e176fe-21b4-4974-b1ed-c8b94f112a7f" path="/var/lib/kubelet/pods/25e176fe-21b4-4974-b1ed-c8b94f112a7f/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.959606 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="308be0ea-9f5f-4b29-aeb1-5abd31a0b17b" path="/var/lib/kubelet/pods/308be0ea-9f5f-4b29-aeb1-5abd31a0b17b/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.960456 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31d8b7a1-420e-4252-a5b7-eebe8a111292" path="/var/lib/kubelet/pods/31d8b7a1-420e-4252-a5b7-eebe8a111292/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.961207 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ab1a177-2de0-46d9-b765-d0d0649bb42e" path="/var/lib/kubelet/pods/3ab1a177-2de0-46d9-b765-d0d0649bb42e/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.962443 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3cb93b32-e0ae-4377-b9c8-fdb9842c6d59" path="/var/lib/kubelet/pods/3cb93b32-e0ae-4377-b9c8-fdb9842c6d59/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.963308 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="43509403-f426-496e-be36-56cef71462f5" path="/var/lib/kubelet/pods/43509403-f426-496e-be36-56cef71462f5/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.965434 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="44663579-783b-4372-86d6-acf235a62d72" path="/var/lib/kubelet/pods/44663579-783b-4372-86d6-acf235a62d72/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.966144 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="496e6271-fb68-4057-954e-a0d97a4afa3f" path="/var/lib/kubelet/pods/496e6271-fb68-4057-954e-a0d97a4afa3f/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.966936 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49c341d1-5089-4bc2-86a0-a5e165cfcc6b" path="/var/lib/kubelet/pods/49c341d1-5089-4bc2-86a0-a5e165cfcc6b/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.968579 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49ef4625-1d3a-4a9f-b595-c2433d32326d" path="/var/lib/kubelet/pods/49ef4625-1d3a-4a9f-b595-c2433d32326d/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.972695 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4bb40260-dbaa-4fb0-84df-5e680505d512" 
path="/var/lib/kubelet/pods/4bb40260-dbaa-4fb0-84df-5e680505d512/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.973622 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5225d0e4-402f-4861-b410-819f433b1803" path="/var/lib/kubelet/pods/5225d0e4-402f-4861-b410-819f433b1803/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.974508 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5441d097-087c-4d9a-baa8-b210afa90fc9" path="/var/lib/kubelet/pods/5441d097-087c-4d9a-baa8-b210afa90fc9/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.975402 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92eda
f5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.975470 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57a731c4-ef35-47a8-b875-bfb08a7f8011" path="/var/lib/kubelet/pods/57a731c4-ef35-47a8-b875-bfb08a7f8011/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.976188 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5b88f790-22fa-440e-b583-365168c0b23d" path="/var/lib/kubelet/pods/5b88f790-22fa-440e-b583-365168c0b23d/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.977620 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5fe579f8-e8a6-4643-bce5-a661393c4dde" path="/var/lib/kubelet/pods/5fe579f8-e8a6-4643-bce5-a661393c4dde/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.978127 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6402fda4-df10-493c-b4e5-d0569419652d" path="/var/lib/kubelet/pods/6402fda4-df10-493c-b4e5-d0569419652d/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.978818 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6509e943-70c6-444c-bc41-48a544e36fbd" path="/var/lib/kubelet/pods/6509e943-70c6-444c-bc41-48a544e36fbd/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.980287 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6731426b-95fe-49ff-bb5f-40441049fde2" path="/var/lib/kubelet/pods/6731426b-95fe-49ff-bb5f-40441049fde2/volumes" 
Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.980805 4725 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.980938 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ea678ab-3438-413e-bfe3-290ae7725660" path="/var/lib/kubelet/pods/6ea678ab-3438-413e-bfe3-290ae7725660/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.983073 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7539238d-5fe0-46ed-884e-1c3b566537ec" path="/var/lib/kubelet/pods/7539238d-5fe0-46ed-884e-1c3b566537ec/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.984553 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7583ce53-e0fe-4a16-9e4d-50516596a136" path="/var/lib/kubelet/pods/7583ce53-e0fe-4a16-9e4d-50516596a136/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.985116 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7bb08738-c794-4ee8-9972-3a62ca171029" path="/var/lib/kubelet/pods/7bb08738-c794-4ee8-9972-3a62ca171029/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.987705 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="87cf06ed-a83f-41a7-828d-70653580a8cb" path="/var/lib/kubelet/pods/87cf06ed-a83f-41a7-828d-70653580a8cb/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.989169 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8cea82b4-6893-4ddc-af9f-1bb5ae425c5b" path="/var/lib/kubelet/pods/8cea82b4-6893-4ddc-af9f-1bb5ae425c5b/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.990313 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="925f1c65-6136-48ba-85aa-3a3b50560753" path="/var/lib/kubelet/pods/925f1c65-6136-48ba-85aa-3a3b50560753/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.992134 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="96b93a3a-6083-4aea-8eab-fe1aa8245ad9" path="/var/lib/kubelet/pods/96b93a3a-6083-4aea-8eab-fe1aa8245ad9/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.993356 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9d4552c7-cd75-42dd-8880-30dd377c49a4" path="/var/lib/kubelet/pods/9d4552c7-cd75-42dd-8880-30dd377c49a4/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.994615 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a0128f3a-b052-44ed-a84e-c4c8aaf17c13" path="/var/lib/kubelet/pods/a0128f3a-b052-44ed-a84e-c4c8aaf17c13/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.995745 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a31745f5-9847-4afe-82a5-3161cc66ca93" path="/var/lib/kubelet/pods/a31745f5-9847-4afe-82a5-3161cc66ca93/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.997559 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b11524ee-3fca-4b1b-9cdf-6da289fdbc7d" path="/var/lib/kubelet/pods/b11524ee-3fca-4b1b-9cdf-6da289fdbc7d/volumes" Jan 20 11:04:54 crc kubenswrapper[4725]: I0120 11:04:54.998713 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6312bbd-5731-4ea0-a20f-81d5a57df44a" path="/var/lib/kubelet/pods/b6312bbd-5731-4ea0-a20f-81d5a57df44a/volumes" Jan 
20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.002653 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b6cd30de-2eeb-49a2-ab40-9167f4560ff5" path="/var/lib/kubelet/pods/b6cd30de-2eeb-49a2-ab40-9167f4560ff5/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.004350 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bc5039c0-ea34-426b-a2b7-fbbc87b49a6d" path="/var/lib/kubelet/pods/bc5039c0-ea34-426b-a2b7-fbbc87b49a6d/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.009164 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bd23aa5c-e532-4e53-bccf-e79f130c5ae8" path="/var/lib/kubelet/pods/bd23aa5c-e532-4e53-bccf-e79f130c5ae8/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.011768 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bf126b07-da06-4140-9a57-dfd54fc6b486" path="/var/lib/kubelet/pods/bf126b07-da06-4140-9a57-dfd54fc6b486/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.013914 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c03ee662-fb2f-4fc4-a2c1-af487c19d254" path="/var/lib/kubelet/pods/c03ee662-fb2f-4fc4-a2c1-af487c19d254/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.015022 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d" path="/var/lib/kubelet/pods/cd70aa09-68dd-4d64-bd6f-156fe6d1dc6d/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.016396 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e7e6199b-1264-4501-8953-767f51328d08" path="/var/lib/kubelet/pods/e7e6199b-1264-4501-8953-767f51328d08/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.017668 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="efdd0498-1daa-4136-9a4a-3b948c2293fc" path="/var/lib/kubelet/pods/efdd0498-1daa-4136-9a4a-3b948c2293fc/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.018752 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f88749ec-7931-4ee7-b3fc-1ec5e11f92e9" path="/var/lib/kubelet/pods/f88749ec-7931-4ee7-b3fc-1ec5e11f92e9/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.020538 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fda69060-fa79-4696-b1a6-7980f124bf7c" path="/var/lib/kubelet/pods/fda69060-fa79-4696-b1a6-7980f124bf7c/volumes" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.031606 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a4c10a0-687d-4b24-b1a9-5aba619c0668-rootfs\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.031720 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a4c10a0-687d-4b24-b1a9-5aba619c0668-proxy-tls\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.031772 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-47wsh\" (UniqueName: 
\"kubernetes.io/projected/6a4c10a0-687d-4b24-b1a9-5aba619c0668-kube-api-access-47wsh\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.031825 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a4c10a0-687d-4b24-b1a9-5aba619c0668-mcd-auth-proxy-config\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.033738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/6a4c10a0-687d-4b24-b1a9-5aba619c0668-mcd-auth-proxy-config\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.033863 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/6a4c10a0-687d-4b24-b1a9-5aba619c0668-rootfs\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.040390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/6a4c10a0-687d-4b24-b1a9-5aba619c0668-proxy-tls\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.139459 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.163660 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-47wsh\" (UniqueName: \"kubernetes.io/projected/6a4c10a0-687d-4b24-b1a9-5aba619c0668-kube-api-access-47wsh\") pod \"machine-config-daemon-z2gv8\" (UID: \"6a4c10a0-687d-4b24-b1a9-5aba619c0668\") " pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.168385 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.222256 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.229250 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.258093 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nz9p5"] Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.259717 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.258876 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270375 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270543 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270625 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270791 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270901 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.270967 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.271002 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.271332 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-vchwb"] Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.271537 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-additional-cni-plugins-z7f69"] Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.272220 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.272476 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.285306 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c9dck" event={"ID":"a3acff9b-8c0b-4a8a-b81f-449be15f3aef","Type":"ContainerStarted","Data":"6c3f9addd3c4256b3c39a76dba36771cc8c2f4ec5d1302bf9430f42ebedeffd9"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.288154 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.288194 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" event={"ID":"37a5e44f-9a88-4405-be8a-b645485e7312","Type":"ContainerStarted","Data":"e7af74551e713f97d5ad3d4402f56eb182a24f58af986f27c8d7d37acdec47d4"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.290639 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"f4495cb2afb253ce59d4073c3d3eb7d2e4b170d9dd03dbd86043d5f30460c780"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.294618 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.296264 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.298447 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.298655 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.298746 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.299449 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"ac496f5dd6638280d62a86ee01e73bd5a039738c60595ff3ab669f5436863a26"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.300754 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.300770 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.302511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.302566 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" 
event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"c54b5992f3ffa538b3496e7eb0c81380a4563755475136c9c8892df1c3100765"} Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.367474 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458524 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-etc-kubernetes\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458567 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458586 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458605 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458641 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-os-release\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458654 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-cni-binary-copy\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458669 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458687 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458703 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-kubelet\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458718 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-socket-dir-parent\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc 
kubenswrapper[4725]: I0120 11:04:55.458733 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-os-release\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458749 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458779 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-daemon-config\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458797 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458832 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-system-cni-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458846 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-cnibin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458874 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-hostroot\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458890 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jbnsp\" (UniqueName: \"kubernetes.io/projected/627f7c97-4173-413f-a90e-e2c5e058c53b-kube-api-access-jbnsp\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 
11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458907 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-system-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458922 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-k8s-cni-cncf-io\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458955 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cnibin\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458969 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458983 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.458997 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459022 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-netns\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459046 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-bin\") pod \"multus-vchwb\" (UID: 
\"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459060 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459089 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459105 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-multus\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459119 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459154 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459186 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-multus-certs\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459239 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459277 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q68t4\" (UniqueName: \"kubernetes.io/projected/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-kube-api-access-q68t4\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459309 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459335 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459349 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459382 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.459398 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-conf-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572286 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-daemon-config\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572611 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572658 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-system-cni-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572674 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572688 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-cnibin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572702 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-hostroot\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572717 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jbnsp\" (UniqueName: \"kubernetes.io/projected/627f7c97-4173-413f-a90e-e2c5e058c53b-kube-api-access-jbnsp\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572733 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-system-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-k8s-cni-cncf-io\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572763 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572776 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cnibin\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572804 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") pod 
\"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572833 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572849 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-netns\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572868 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-bin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572882 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572911 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-multus\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") pod \"ovnkube-node-nz9p5\" (UID: 
\"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572953 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-multus-certs\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.572990 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q68t4\" (UniqueName: \"kubernetes.io/projected/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-kube-api-access-q68t4\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573006 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573020 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573034 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573066 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573116 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-cni-dir\") pod 
\"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573136 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-conf-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573154 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-etc-kubernetes\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573169 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573184 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573212 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573232 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573247 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-os-release\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-cni-binary-copy\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573287 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573301 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-kubelet\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573327 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-socket-dir-parent\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573341 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-os-release\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573354 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573837 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-binary-copy\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573896 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573935 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-system-cni-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573993 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"cnibin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-cnibin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574065 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-hostroot\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574155 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574347 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-system-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574376 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-k8s-cni-cncf-io\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574411 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574434 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cnibin\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574460 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574615 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-conf-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574688 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-multus\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc 
kubenswrapper[4725]: I0120 11:04:55.574715 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574725 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-tuning-conf-dir\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574737 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574760 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-multus-certs\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574766 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-run-netns\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574823 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-cni-bin\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574959 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-os-release\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.574977 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575031 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: 
\"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-etc-kubernetes\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575113 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.573192 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-daemon-config\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575536 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/627f7c97-4173-413f-a90e-e2c5e058c53b-cni-binary-copy\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575587 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575618 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575633 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575654 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575696 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-socket-dir-parent\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575716 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-multus-cni-dir\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575722 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/627f7c97-4173-413f-a90e-e2c5e058c53b-host-var-lib-kubelet\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575760 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-os-release\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.575885 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.576271 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.599800 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.711141 4725 kubelet_node_status.go:401] "Setting node annotation to enable volume controller attach/detach" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.715133 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:55 crc kubenswrapper[4725]: E0120 11:04:55.715318 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:04:57.715293226 +0000 UTC m=+25.923615199 (durationBeforeRetry 2s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717532 4725 kubelet_node_status.go:76] "Attempting to register node" node="crc" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.717768 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jbnsp\" (UniqueName: \"kubernetes.io/projected/627f7c97-4173-413f-a90e-e2c5e058c53b-kube-api-access-jbnsp\") pod \"multus-vchwb\" (UID: \"627f7c97-4173-413f-a90e-e2c5e058c53b\") " pod="openshift-multus/multus-vchwb" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.719481 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") pod \"ovnkube-node-nz9p5\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.720325 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q68t4\" (UniqueName: \"kubernetes.io/projected/4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0-kube-api-access-q68t4\") pod \"multus-additional-cni-plugins-z7f69\" (UID: \"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\") " pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.735940 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"m
ountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.746425 4725 kubelet_node_status.go:115] "Node was previously registered" node="crc" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.746709 4725 kubelet_node_status.go:79] "Successfully registered node" node="crc" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeNotReady" Jan 20 11:04:55 crc kubenswrapper[4725]: I0120 11:04:55.747803 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:55Z","lastTransitionTime":"2026-01-20T11:04:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.061976 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-z7f69" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.062039 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.062967 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063123 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 20:51:20.430914914 +0000 UTC Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063155 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063169 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-vchwb" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063142 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063225 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063249 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063286 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063323 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063345 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.063390 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063367 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063449 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063461 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063470 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063476 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:58.063455055 +0000 UTC m=+26.271777028 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063521 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:58.063510237 +0000 UTC m=+26.271832210 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063549 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063564 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063575 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.063625 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:58.06360979 +0000 UTC m=+26.271931803 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:56 crc kubenswrapper[4725]: W0120 11:04:56.091641 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9143f3c2_a068_494d_b7e1_4200c04394a3.slice/crio-841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2 WatchSource:0}: Error finding container 841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2: Status 404 returned error can't find the container with id 841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2 Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.139558 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150508 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150859 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.150883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.189398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.190269 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.190785 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:04:58.190570679 +0000 UTC m=+26.398892652 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.201549 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228285 4725 
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.228316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.235381 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.409934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"b2417ad0dc80b5b1ae4121d1bb3e00865d148a8b7a5961fa3babe151601b99d7"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.416263 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"293cb950a6f3068b98caed1152bca23ce692d80ad5274feae968cc50159c725f"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.417860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.430889 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.431356 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443534 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443572 4725
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.443610 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.444730 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" event={"ID":"ef543e1b-8068-4ea3-b32a-61027b32e95d","Type":"ContainerStarted","Data":"dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.456188 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.457682 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.460843 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-c9dck" event={"ID":"a3acff9b-8c0b-4a8a-b81f-449be15f3aef","Type":"ContainerStarted","Data":"18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07"} Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.463677 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[…],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472198 4725
kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.472223 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.474054 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.483486 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: E0120 11:04:56.483601 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.485863 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.590395 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.597506 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.609588 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.734414 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.749305 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.775882 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.858752 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.865553 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastS
tate\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.895925 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.906258 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.914686 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.926263 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceac
count\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.936281 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.946213 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was 
deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.957110 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc 
kubenswrapper[4725]: I0120 11:04:56.961738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.961834 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:56Z","lastTransitionTime":"2026-01-20T11:04:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.969326 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 20 11:04:56 crc kubenswrapper[4725]: I0120 11:04:56.989756 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:56Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.002010 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.025306 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.053472 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.067523 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 15:42:01.327327958 +0000 UTC Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.069786 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.073899 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.105196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\
\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\
\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.122142 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.140143 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.152315 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.164777 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.172316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.198038 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\
\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastSt
ate\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.239422 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.264131 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.275250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.377952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.377995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.378019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.378037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.378049 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.476053 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.477262 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.480364 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.481051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.481876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.481980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.482375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.482755 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.482525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" event={"ID":"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49","Type":"ContainerStarted","Data":"a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.483712 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa" exitCode=0 Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.483770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.569617 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.590213 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.615210 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imag
eID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"
mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.12
6.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.633160 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/node-ca-fv2jh"] Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.633563 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.635711 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.635912 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.636198 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.636892 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.637778 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.663190 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.702793 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.742802 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.758191 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.771571 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.771702 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh4k2\" (UniqueName: \"kubernetes.io/projected/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-kube-api-access-rh4k2\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: E0120 11:04:57.771734 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:05:01.771693552 +0000 UTC m=+29.980015585 (durationBeforeRetry 4s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.771787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-serviceca\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.771822 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-host\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.779072 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.791733 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCou
nt\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 
2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.808835 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\
"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.822995 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256
:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.838955 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.846923 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.855059 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.870119 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.872712 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh4k2\" (UniqueName: \"kubernetes.io/projected/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-kube-api-access-rh4k2\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.872759 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-serviceca\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.872782 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-host\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.872903 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-host\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.874386 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-serviceca\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.883824 4725 
status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.895208 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh4k2\" (UniqueName: \"kubernetes.io/projected/a3fffa1c-6d54-432d-9090-da67cd8ca2ee-kube-api-access-rh4k2\") pod \"node-ca-fv2jh\" (UID: \"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\") " pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.909607 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\
\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.932153 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:57 crc kubenswrapper[4725]: E0120 11:04:57.932284 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.932699 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:57 crc kubenswrapper[4725]: E0120 11:04:57.932771 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.932834 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:57 crc kubenswrapper[4725]: E0120 11:04:57.932906 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.952805 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:57Z","lastTransitionTime":"2026-01-20T11:04:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.975501 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\
":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finished
At\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:57 crc kubenswrapper[4725]: I0120 11:04:57.979111 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/node-ca-fv2jh" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.002210 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:57Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.035610 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to 
patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\
\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"la
stState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.048818 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.055430 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.069434 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-12 21:47:49.869167134 +0000 UTC Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.071352 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveRea
dOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-r
elease\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.075026 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.075072 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.075178 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075346 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075367 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075380 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object 
"openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075431 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:02.07541384 +0000 UTC m=+30.283735813 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075734 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075793 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:02.075775591 +0000 UTC m=+30.284097604 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075864 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075891 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075903 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.075943 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:02.075932226 +0000 UTC m=+30.284254249 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.092398 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes
\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.106159 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"na
me\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.120217 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.130931 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.141984 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"message\\\":\\\"containers with unready status: 
[node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.154610 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.157535 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.167980 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.180942 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.194612 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.260894 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.278671 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.278888 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: E0120 11:04:58.279003 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:02.278980692 +0000 UTC m=+30.487302685 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365578 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.365608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.468373 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.497539 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fv2jh" event={"ID":"a3fffa1c-6d54-432d-9090-da67cd8ca2ee","Type":"ContainerStarted","Data":"50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.497607 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-fv2jh" event={"ID":"a3fffa1c-6d54-432d-9090-da67cd8ca2ee","Type":"ContainerStarted","Data":"23077b4603f9d9f7226353bc7284da75ee15fe39826b9d621fa4231e9b413fb4"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.500272 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf" exitCode=0 Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.500352 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.506019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.506057 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.506069 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.525589 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"nam
e\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6
173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.543434 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-r
un-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.589410 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615871 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.615968 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.620758 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.652252 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.707739 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777046 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.777144 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.779800 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.879390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:58Z","lastTransitionTime":"2026-01-20T11:04:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.883349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:58 crc kubenswrapper[4725]: I0120 11:04:58.946851 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.000186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.002187 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.043037 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.070027 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-17 11:46:46.057429451 +0000 UTC Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.076674 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with 
unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\
\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.110913 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.122394 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.138451 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.255490 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.269139 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.282146 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.291903 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.301286 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.321933 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.337803 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.356295 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.361958 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.377821 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.402439 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.425885 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de25971
26bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.442630 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.457103 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.458585 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.469703 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.484120 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/hos
t/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\
\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.566113 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.570836 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.570872 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.570881 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.573936 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\
\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.574716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.588552 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.602471 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.621898 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z 
is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.648504 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.670709 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.728826 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.745148 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.758386 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.771089 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773876 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.773957 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.791338 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64
b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call 
webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.813020 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":
\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.827721 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.841852 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.852164 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.861757 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.875486 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:04:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.876730 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.932138 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.932178 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.932139 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:04:59 crc kubenswrapper[4725]: E0120 11:04:59.932281 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:04:59 crc kubenswrapper[4725]: E0120 11:04:59.932330 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:04:59 crc kubenswrapper[4725]: E0120 11:04:59.932398 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979099 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:04:59 crc kubenswrapper[4725]: I0120 11:04:59.979158 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:04:59Z","lastTransitionTime":"2026-01-20T11:04:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.070359 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 12:59:18.355067544 +0000 UTC Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.081845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.187332 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290748 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.290764 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.393629 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495757 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.495793 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.589949 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521" exitCode=0 Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.590069 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.598873 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.611190 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.628939 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.647740 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z 
is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.663704 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\
\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.680264 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.690946 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.704116 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707278 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.707363 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.733568 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-
20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.751542 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\
"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.768023 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.782100 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.796678 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.809920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.809962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.809971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.809988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.810000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.815303 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.826646 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.841533 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:00Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:00 crc kubenswrapper[4725]: I0120 11:05:00.912688 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:00Z","lastTransitionTime":"2026-01-20T11:05:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016661 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.016751 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.070804 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:46:57.780160046 +0000 UTC Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.118392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.221174 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.323275 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.426201 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.529216 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.594963 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb" exitCode=0 Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.595020 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.620238 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.631556 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.638327 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.650741 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.662814 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.680322 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\
\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\
\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.716760 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.738110 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.810145 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:05:01 crc kubenswrapper[4725]: E0120 11:05:01.810375 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:05:09.810352815 +0000 UTC m=+38.018674788 (durationBeforeRetry 8s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.831851 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.841199 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.846288 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.855027 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.864496 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.875512 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.892345 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.916425 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.931731 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.931795 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:01 crc kubenswrapper[4725]: E0120 11:05:01.931863 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:01 crc kubenswrapper[4725]: E0120 11:05:01.931922 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.931739 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:01 crc kubenswrapper[4725]: E0120 11:05:01.932000 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.938334 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.943929 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:01Z","lastTransitionTime":"2026-01-20T11:05:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:01 crc kubenswrapper[4725]: I0120 11:05:01.961849 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:01Z 
is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.046669 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.071172 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 05:50:40.08269041 +0000 UTC Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.113022 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.113065 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.113097 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113197 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113249 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:10.113236498 +0000 UTC m=+38.321558471 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113304 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113335 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113347 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113400 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:10.113383483 +0000 UTC m=+38.321705456 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113463 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113508 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113524 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.113608 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:10.113587999 +0000 UTC m=+38.321910042 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.149255 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.251667 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.355170 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.355409 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: E0120 11:05:02.355567 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:10.355516749 +0000 UTC m=+38.563838732 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.357728 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.461974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.462284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.462391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.462495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.462590 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.566584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.607245 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c" exitCode=0 Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.607439 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.616261 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.637540 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get 
\\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.669759 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.678862 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.696958 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.720179 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.734175 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.766034 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771235 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.771264 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.778426 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.790025 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.800562 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.811265 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.822277 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.834868 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.844663 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.860064 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.872530 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.873676 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.962345 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.977482 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:02Z","lastTransitionTime":"2026-01-20T11:05:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.978843 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:02 crc kubenswrapper[4725]: I0120 11:05:02.995142 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.012532 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.027765 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6
c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/
entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.042917 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.058714 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.070391 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.071300 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 03:11:58.200343424 +0000 UTC Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.079908 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.081117 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.093223 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.104560 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.197870 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.210600 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartC
ount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.231834 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.248071 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.265899 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z 
is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.300144 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.402574 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.498633 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.514226 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.528561 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.542241 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.555827 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.573981 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-op
envswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{
},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.585962 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-
kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 
secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.598443 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.610337 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.630676 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.632679 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.643981 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{
\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.765160 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.766255 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\"
:\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disa
bled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.779381 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8
s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.790859 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.802493 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.814446 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.827108 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.836484 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.851528 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.868988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.869018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.869027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.869041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.869050 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.884714 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.900507 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.914750 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.926358 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.931438 4725 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.931475 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.931491 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:03 crc kubenswrapper[4725]: E0120 11:05:03.931637 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:03 crc kubenswrapper[4725]: E0120 11:05:03.931688 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:03 crc kubenswrapper[4725]: E0120 11:05:03.931770 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.940652 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{
\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba6196
43bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.951364 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.961893 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.970951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.970994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.971005 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.971019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.971028 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:03Z","lastTransitionTime":"2026-01-20T11:05:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.973768 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.983985 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:03 crc kubenswrapper[4725]: I0120 11:05:03.998797 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.012361 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 
1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.032511 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.047862 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.066203 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z 
is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.072685 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-13 21:59:50.374556764 +0000 UTC Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.073349 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.176240 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.322508 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424496 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.424517 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.526465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.529626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.529648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.529674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.529691 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.632989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.633042 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.633061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.633131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.633157 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.642061 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64" exitCode=0 Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.642179 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.649223 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.649733 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.649807 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.670368 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.696789 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io
/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.707627 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.739583 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.744248 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":fa
lse,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\
\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f95
86a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.752688 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.770383 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.859186 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.867678 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.884935 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.897912 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.908942 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.948814 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e76
1bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\
\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.962055 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972662 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.972693 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:04Z","lastTransitionTime":"2026-01-20T11:05:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.982526 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc358257
71aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:04 crc kubenswrapper[4725]: I0120 11:05:04.998650 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:04Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.014363 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.025766 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.040946 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.057924 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/o
penshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.073256 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-20 18:30:47.178010685 +0000 UTC Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.075674 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.079264 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\
\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07
b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.097560 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.116304 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.129543 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.146986 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.160569 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.177912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.177957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.177971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.177989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.178000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.181406 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.202321 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with incomplete status: [whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":
true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\
"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.216713 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.275589 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.282661 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.291967 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.301185 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.311692 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.338633 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.384565 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.486612 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.552720 4725 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.589381 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.656487 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0" containerID="838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906" exitCode=0 Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.656670 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.670411 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerDied","Data":"838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.684162 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 
11:05:05.700769 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.709719 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.715467 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.739027 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.754811 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.768586 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluste
r-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 
1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.790154 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.816025 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.819258 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.843895 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\
"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.862716 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.879170 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a168
8df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"
/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.891339 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.907503 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.937307 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.937540 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:05 crc kubenswrapper[4725]: E0120 11:05:05.937672 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.937742 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:05 crc kubenswrapper[4725]: E0120 11:05:05.937799 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:05 crc kubenswrapper[4725]: E0120 11:05:05.938312 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938449 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.938461 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:05Z","lastTransitionTime":"2026-01-20T11:05:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.945186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:05 crc kubenswrapper[4725]: I0120 11:05:05.957291 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:05Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.041344 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.073528 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-07 00:43:50.027797823 +0000 UTC Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.143968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.144032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.144049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.144105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.144132 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.247367 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.350341 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.453976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.454041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.454069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.454138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.454158 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.557996 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.607468 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.633753 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.640469 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.655705 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.661608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.664369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" event={"ID":"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0","Type":"ContainerStarted","Data":"ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.664538 4725 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.678034 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.678706 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.683727 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.693826 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.699355 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory 
available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\
"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":45063
7738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.702961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.702998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.703008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.703024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.703033 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.736947 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: E0120 11:05:06.737163 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.738935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.739118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.739242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.739366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.739514 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.742919 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"sta
rted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":
\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.762395 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.807761 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"D
isabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.821554 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.836607 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841320 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.841360 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.848053 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.858445 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.873049 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.932916 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.944800 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:06Z","lastTransitionTime":"2026-01-20T11:05:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.948054 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.961201 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.978761 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:06 crc kubenswrapper[4725]: I0120 11:05:06.988783 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:06Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.047491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.074364 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 02:51:23.40814568 +0000 UTC Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.149829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.150164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.150246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.150333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.150427 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.253858 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356512 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.356530 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.459387 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.561492 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.663986 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.766754 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869599 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.869631 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.918240 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.931220 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:07 crc kubenswrapper[4725]: E0120 11:05:07.931364 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.931739 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:07 crc kubenswrapper[4725]: E0120 11:05:07.931802 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.931848 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:07 crc kubenswrapper[4725]: E0120 11:05:07.931899 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:07 crc kubenswrapper[4725]: I0120 11:05:07.972302 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:07Z","lastTransitionTime":"2026-01-20T11:05:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.074979 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 02:01:04.36182691 +0000 UTC Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.075331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.075773 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.075837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.075976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.076084 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.178422 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.281951 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.384995 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.487976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.488064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.488134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.488168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.488191 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.591379 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.693949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.694001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.694017 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.694036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.694048 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.797267 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:08 crc kubenswrapper[4725]: I0120 11:05:08.900551 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:08Z","lastTransitionTime":"2026-01-20T11:05:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.003776 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.076816 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 01:34:28.401723752 +0000 UTC Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.106641 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210262 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.210432 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.313984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.314021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.314031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.314045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.314056 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416908 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.416919 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.520228 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.623616 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.676422 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/0.log" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.680393 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4" exitCode=1 Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.680451 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.681574 4725 scope.go:117] "RemoveContainer" containerID="017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.701775 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.718955 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.725847 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.732853 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.751360 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.753497 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r"] Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.754294 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.757683 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.757960 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.770697 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.787989 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mount
Path\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.807524 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f
8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.821603 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2
925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827524 4725 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.827580 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834102 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:05:09 crc kubenswrapper[4725]: E0120 11:05:09.834248 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:05:25.83423025 +0000 UTC m=+54.042552223 (durationBeforeRetry 16s). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834313 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834341 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de4324f-3428-4409-92a4-940e5b94fe12-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834365 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.834449 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bfkbm\" (UniqueName: \"kubernetes.io/projected/6de4324f-3428-4409-92a4-940e5b94fe12-kube-api-access-bfkbm\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.839460 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.851436 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error 
occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.860738 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.869836 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.884834 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.897894 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.911848 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930181 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.930928 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:09Z","lastTransitionTime":"2026-01-20T11:05:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.931192 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.931216 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:09 crc kubenswrapper[4725]: E0120 11:05:09.931305 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.931201 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:09 crc kubenswrapper[4725]: E0120 11:05:09.931403 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:09 crc kubenswrapper[4725]: E0120 11:05:09.931468 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.936314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.936353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de4324f-3428-4409-92a4-940e5b94fe12-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.936380 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.936439 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bfkbm\" (UniqueName: \"kubernetes.io/projected/6de4324f-3428-4409-92a4-940e5b94fe12-kube-api-access-bfkbm\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.937258 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-env-overrides\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.937336 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/6de4324f-3428-4409-92a4-940e5b94fe12-ovnkube-config\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.943928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/6de4324f-3428-4409-92a4-940e5b94fe12-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.944294 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.954756 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bfkbm\" (UniqueName: \"kubernetes.io/projected/6de4324f-3428-4409-92a4-940e5b94fe12-kube-api-access-bfkbm\") pod \"ovnkube-control-plane-749d76644c-8ls4r\" (UID: \"6de4324f-3428-4409-92a4-940e5b94fe12\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.955708 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.969670 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.981821 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:09 crc kubenswrapper[4725]: I0120 11:05:09.992621 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:09Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.009019 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.024917 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.033570 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.039805 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.052670 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.067706 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.077789 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-16 08:01:18.033942352 +0000 UTC Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.085505 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.096229 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.112206 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.128739 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.137858 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.138385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.138409 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.138431 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138537 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138590 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.138575647 +0000 UTC m=+54.346897620 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138921 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138945 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138961 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.138994 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.138983949 +0000 UTC m=+54.347305922 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.139044 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.139054 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.139060 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.139087 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.139079412 +0000 UTC m=+54.347401385 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.155834 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f
8fea06aa06f0b15faaaffff4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.
io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.168498 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2
925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240030 4725 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240081 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.240110 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.342618 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.440484 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.440648 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.440712 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.440697999 +0000 UTC m=+54.649019972 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.444999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.445038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.445050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.445068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.445099 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.548692 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.712570 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.717911 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/0.log" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.722337 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.723450 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.723864 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/network-metrics-daemon-5lfc4"] Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.724349 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.724414 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.729266 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" event={"ID":"6de4324f-3428-4409-92a4-940e5b94fe12","Type":"ContainerStarted","Data":"cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.729319 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" event={"ID":"6de4324f-3428-4409-92a4-940e5b94fe12","Type":"ContainerStarted","Data":"abb81a1095b54a94c5a5182c1e9a6a73268fc43c55e54d3c0707e2ded1786f3b"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.739172 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b
0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.760976 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.773995 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.799592 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c6715
5e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.807403 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lljhl\" (UniqueName: \"kubernetes.io/projected/a5d55efc-e85a-4a02-a4ce-7355df9fea66-kube-api-access-lljhl\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.808344 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.812867 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814712 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.814735 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.827019 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.840403 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.854080 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.871165 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.885406 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.900499 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.909035 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lljhl\" (UniqueName: \"kubernetes.io/projected/a5d55efc-e85a-4a02-a4ce-7355df9fea66-kube-api-access-lljhl\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.909196 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.909343 4725 
secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: E0120 11:05:10.909411 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:11.409390093 +0000 UTC m=+39.617712106 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.913156 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.917892 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:10Z","lastTransitionTime":"2026-01-20T11:05:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.926210 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.927595 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lljhl\" (UniqueName: \"kubernetes.io/projected/a5d55efc-e85a-4a02-a4ce-7355df9fea66-kube-api-access-lljhl\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.940715 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.955220 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.968503 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.983422 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:10 crc kubenswrapper[4725]: I0120 11:05:10.996071 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:10Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.010232 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020074 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020104 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.020115 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.022076 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.034474 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.049585 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.060731 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.072422 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.078068 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-10 01:06:15.114095204 +0000 UTC Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.083464 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.094349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.106668 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.119526 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.121966 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.132909 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.147455 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.169826 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.186649 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.221017 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c6715
5e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\
\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.223846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328649 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.328662 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.413961 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.414182 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.414255 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:12.41423681 +0000 UTC m=+40.622558793 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.431862 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.534601 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.637981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.638054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.638148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.638186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.638209 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.736542 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/1.log" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.737531 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/0.log" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.740901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.740952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.740975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.741002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.741023 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.741957 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" exitCode=1 Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.742029 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.742083 4725 scope.go:117] "RemoveContainer" containerID="017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.743827 4725 scope.go:117] "RemoveContainer" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.744326 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.749212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" event={"ID":"6de4324f-3428-4409-92a4-940e5b94fe12","Type":"ContainerStarted","Data":"94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.761140 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.786856 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.803036 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.821468 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c6715
5e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 
8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a
8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.834322 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\"
:\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.843733 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.847245 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.860284 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for 
pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.870534 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.882510 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.898181 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.911587 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.925569 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.931742 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.931769 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.931877 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.931890 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.932021 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:11 crc kubenswrapper[4725]: E0120 11:05:11.932175 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.935619 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946397 4725 setters.go:603] "Node became 
not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:11Z","lastTransitionTime":"2026-01-20T11:05:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.946561 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.959920 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.970759 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.980998 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:11 crc kubenswrapper[4725]: I0120 11:05:11.996337 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.014626 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.035354 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.049642 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.065845 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://017096767eeb1b40073e7f4de6ddf7d02039af4f8fea06aa06f0b15faaaffff4\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"message\\\":\\\"0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/adminpolicybasedroute/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603752 5931 reflector.go:311] Stopping reflector *v1.EgressService (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressservice/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603803 5931 reflector.go:311] Stopping reflector *v1.EgressQoS (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressqos/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.603923 5931 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604309 5931 reflector.go:311] Stopping reflector *v1.EgressIP (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressip/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:08.604414 5931 reflector.go:311] Stopping reflector *v1alpha1.AdminNetworkPolicy (0s) from sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/factory.go:141\\\\nI0120 11:05:08.604458 5931 reflector.go:311] Stopping reflector *v1alpha1.BaselineAdminNetworkPolicy (0s) from 
sigs.k8s.io/network-policy-api/pkg/client/informers/externalversions/f\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:04Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0
d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.079037 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 20:01:24.124799238 +0000 UTC Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.087842 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.114222 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to 
verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.134088 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153150 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153168 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.153213 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in 
/etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.157342 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.185350 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 
2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.209146 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.229127 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.248322 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.256643 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.263427 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.277849 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.292807 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.310621 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.329863 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.359385 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
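Every status patch in this stretch fails identically: the API server cannot call the pod.network-node-identity.openshift.io webhook at https://127.0.0.1:9743 because its serving certificate expired on 2025-08-24T17:21:41Z, long before the node's current clock of 2026-01-20. A minimal Go sketch of the same validity-window check, useful for confirming what a webhook endpoint is actually serving; the address is taken from the log, and InsecureSkipVerify is set only so the handshake completes and the expired leaf can be inspected:

package main

import (
	"crypto/tls"
	"fmt"
	"time"
)

func main() {
	// Webhook endpoint taken from the kubelet errors above.
	conn, err := tls.Dial("tcp", "127.0.0.1:9743", &tls.Config{
		InsecureSkipVerify: true, // inspection only: normal verification fails on the expired cert
	})
	if err != nil {
		fmt.Println("dial:", err)
		return
	}
	defer conn.Close()

	certs := conn.ConnectionState().PeerCertificates
	if len(certs) == 0 {
		fmt.Println("no peer certificate presented")
		return
	}
	leaf, now := certs[0], time.Now()
	fmt.Printf("NotBefore=%s NotAfter=%s\n",
		leaf.NotBefore.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	// The same window check crypto/x509 applies when it reports
	// "certificate has expired or is not yet valid".
	switch {
	case now.Before(leaf.NotBefore):
		fmt.Println("certificate is not yet valid")
	case now.After(leaf.NotAfter):
		fmt.Printf("certificate has expired: current time %s is after %s\n",
			now.UTC().Format(time.RFC3339), leaf.NotAfter.UTC().Format(time.RFC3339))
	default:
		fmt.Println("certificate is within its validity window")
	}
}

Until the webhook's serving certificate is renewed (on CRC this typically happens once cluster certificate rotation catches up after a large clock jump), every kubelet status patch that this webhook intercepts will keep failing with the same x509 error.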
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.425915 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4"
Jan 20 11:05:12 crc kubenswrapper[4725]: E0120 11:05:12.426212 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 20 11:05:12 crc kubenswrapper[4725]: E0120 11:05:12.426339 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:14.426307845 +0000 UTC m=+42.634629858 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.462745 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
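The nestedpendingoperations entry above schedules the next MountVolume attempt exactly durationBeforeRetry (2s) after the failure, at 11:05:14; on each consecutive failure of the same volume operation the delay grows. A doubling-with-cap sketch of that pattern in Go; the 500ms starting delay and 2m cap are illustrative assumptions, not values read out of this kubelet:

package main

import (
	"fmt"
	"time"
)

// expBackoff doubles the delay after every consecutive failure of the
// same operation, up to a cap. The constants are assumptions for
// illustration only.
type expBackoff struct {
	initial, cap, current time.Duration
}

func (b *expBackoff) next() time.Duration {
	if b.current == 0 {
		b.current = b.initial
		return b.current
	}
	b.current *= 2
	if b.current > b.cap {
		b.current = b.cap
	}
	return b.current
}

func main() {
	b := &expBackoff{initial: 500 * time.Millisecond, cap: 2 * time.Minute}
	failedAt := time.Date(2026, 1, 20, 11, 5, 12, 0, time.UTC)
	for attempt := 1; attempt <= 5; attempt++ {
		d := b.next()
		fmt.Printf("attempt %d failed; no retries permitted until %s (durationBeforeRetry %s)\n",
			attempt, failedAt.Add(d).Format(time.RFC3339), d)
		failedAt = failedAt.Add(d)
	}
}

The underlying error here is not the backoff itself: the "metrics-daemon-secret not registered" failures will persist until the kubelet's informers resync, so each retry simply reschedules at a longer delay.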
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.565685 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.668801 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
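The node keeps being marked NotReady because the runtime reports no CNI configuration in /etc/kubernetes/cni/net.d/; ovnkube-controller, which would write that config, is crash-looping just below. A sketch of the kind of directory scan behind the NetworkPluginNotReady message; the path is the one named in the log and the accepted extensions follow the usual libcni/ocicni convention:

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

func main() {
	// Directory named in the NetworkPluginNotReady message above.
	dir := "/etc/kubernetes/cni/net.d"
	entries, err := os.ReadDir(dir)
	if err != nil {
		fmt.Println("readdir:", err)
		return
	}
	var confs []string
	for _, e := range entries {
		if e.IsDir() {
			continue
		}
		// Extensions that libcni/ocicni-style loaders accept.
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			confs = append(confs, filepath.Join(dir, e.Name()))
		}
	}
	if len(confs) == 0 {
		fmt.Printf("no CNI configuration file in %s. Has your network provider started?\n", dir)
		return
	}
	fmt.Println("CNI configs found:", confs)
}

As soon as the network plugin drops a valid .conf or .conflist into that directory, the runtime flips NetworkReady back to true and the kubelet stops emitting these NodeNotReady transitions.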
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.756333 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/1.log"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.760337 4725 scope.go:117] "RemoveContainer" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3"
Jan 20 11:05:12 crc kubenswrapper[4725]: E0120 11:05:12.760487 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 10s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.770845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
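The pod_workers entry above shows the kubelet refusing to restart ovnkube-controller for 10s after its crash ("back-off 10s restarting"); the delay doubles on each subsequent crash up to a cap, which is what CrashLoopBackOff denotes. A compact Go sketch of that schedule, taking the commonly cited 10s initial delay and 5m cap as assumptions:

package main

import (
	"fmt"
	"time"
)

func main() {
	// Crash-loop restart delay: doubles after every failed restart,
	// capped. 10s initial / 5m cap are assumed defaults here, not
	// values read out of this kubelet's configuration.
	const initial = 10 * time.Second
	const maxDelay = 5 * time.Minute
	delay := initial
	for restart := 1; restart <= 7; restart++ {
		fmt.Printf("restart %d: back-off %s restarting failed container\n", restart, delay)
		delay *= 2
		if delay > maxDelay {
			delay = maxDelay
		}
	}
}

With restartCount still at 1 in the status below, the container is early in this schedule, which is why the log shows the minimum 10s back-off.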
Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.778971 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.793568 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc 
kubenswrapper[4725]: I0120 11:05:12.808146 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.832779 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.847341 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.867003 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c6715
5e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873256 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.873904 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.881595 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.896267 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.910929 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.922980 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.931369 4725 util.go:30] "No sandbox for pod can 
be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:12 crc kubenswrapper[4725]: E0120 11:05:12.931560 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.943700 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-co
py\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\
"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod 
\"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.959018 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\
\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.972567 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"conta
inerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.976852 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:12Z","lastTransitionTime":"2026-01-20T11:05:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.984931 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:12 crc kubenswrapper[4725]: I0120 11:05:12.997069 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.008534 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.019005 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.031132 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.042874 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.053432 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.064269 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079116 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.079195 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-27 04:50:32.574656554 +0000 UTC Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.080937 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\"
:\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.092379 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.105403 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.122962 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeM
ounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"fi
nishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.136534 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.154940 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.170573 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.181899 4725 kubelet_node_status.go:724] "Recording event 
message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.181938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.181975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.181994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.182006 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.186653 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.203913 4725 status_manager.go:875] "Failed to update status for pod" 
pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.234194 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.247700 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.260771 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.270918 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.283765 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.390446 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.493203 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595455 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595475 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.595517 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.699685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.700039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.700053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.700077 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.700125 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.803899 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.906932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.906972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.906983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.906999 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.907011 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:13Z","lastTransitionTime":"2026-01-20T11:05:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.936937 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:13 crc kubenswrapper[4725]: E0120 11:05:13.937650 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.936990 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:13 crc kubenswrapper[4725]: E0120 11:05:13.937882 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:13 crc kubenswrapper[4725]: I0120 11:05:13.936926 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:13 crc kubenswrapper[4725]: E0120 11:05:13.938328 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011025 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011069 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.011134 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.079735 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 19:20:43.794980762 +0000 UTC Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113918 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.113938 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.216555 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.318946 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.422926 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.448249 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:14 crc kubenswrapper[4725]: E0120 11:05:14.448498 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:14 crc kubenswrapper[4725]: E0120 11:05:14.448596 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:18.448576264 +0000 UTC m=+46.656898247 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.526758 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.629784 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.732809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.732884 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.732907 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.732940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.733032 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836449 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.836506 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.934666 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:14 crc kubenswrapper[4725]: E0120 11:05:14.934936 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.940744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.941383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.941410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.941430 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:14 crc kubenswrapper[4725]: I0120 11:05:14.941440 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:14Z","lastTransitionTime":"2026-01-20T11:05:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.043870 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.080917 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-10 10:52:22.249678218 +0000 UTC Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.147648 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.252344 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.355946 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460449 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460467 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.460511 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.562978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.563042 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.563058 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.563108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.563126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.665849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.772630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876284 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.876401 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.932485 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.932485 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.932519 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:15 crc kubenswrapper[4725]: E0120 11:05:15.932812 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:15 crc kubenswrapper[4725]: E0120 11:05:15.932903 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:15 crc kubenswrapper[4725]: E0120 11:05:15.933058 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978735 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:15 crc kubenswrapper[4725]: I0120 11:05:15.978777 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:15Z","lastTransitionTime":"2026-01-20T11:05:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.081018 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-18 09:30:47.537260507 +0000 UTC Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082354 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082374 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.082390 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.186486 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.289346 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392785 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.392941 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.496785 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.598976 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.701502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.701803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.701896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.702001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.702108 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.805271 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.907615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:16Z","lastTransitionTime":"2026-01-20T11:05:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:16 crc kubenswrapper[4725]: I0120 11:05:16.932390 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:16 crc kubenswrapper[4725]: E0120 11:05:16.932598 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014262 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014279 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.014327 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.081755 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:50:51.699149979 +0000 UTC Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084137 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.084210 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.103294 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108535 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.108614 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.132492 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.137759 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.161442 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.166953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.167024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.167043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.167067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.167126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.183758 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.187844 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.201516 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:17Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:17Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.201695 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.203412 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.306698 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409874 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.409912 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.515750 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619146 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.619159 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.738433 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842483 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.842587 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.931878 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.932142 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.932133 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.932147 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.932290 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:17 crc kubenswrapper[4725]: E0120 11:05:17.932427 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946084 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:17 crc kubenswrapper[4725]: I0120 11:05:17.946199 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:17Z","lastTransitionTime":"2026-01-20T11:05:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.048986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.049036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.049053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.049108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.049136 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.082792 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-25 06:09:41.561282613 +0000 UTC Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.152288 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.254801 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.357239 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.460150 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.544674 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:18 crc kubenswrapper[4725]: E0120 11:05:18.544885 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:18 crc kubenswrapper[4725]: E0120 11:05:18.544957 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:26.544941812 +0000 UTC m=+54.753263785 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562251 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.562268 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665079 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665114 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.665163 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767887 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.767905 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870826 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.870869 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.932218 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:18 crc kubenswrapper[4725]: E0120 11:05:18.932521 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:18 crc kubenswrapper[4725]: I0120 11:05:18.974290 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:18Z","lastTransitionTime":"2026-01-20T11:05:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.077297 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.083362 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-08 19:15:47.220643713 +0000 UTC Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.180603 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.282922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.282998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.283012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.283040 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.283052 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386261 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.386303 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.491600 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.594251 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.697761 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.800981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.801065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.801135 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.801167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.801233 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.904828 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:19Z","lastTransitionTime":"2026-01-20T11:05:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.931968 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.932014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:19 crc kubenswrapper[4725]: I0120 11:05:19.932007 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:19 crc kubenswrapper[4725]: E0120 11:05:19.932332 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:19 crc kubenswrapper[4725]: E0120 11:05:19.932437 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:19 crc kubenswrapper[4725]: E0120 11:05:19.932683 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.007372 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.084149 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 19:04:58.719384255 +0000 UTC Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111923 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.111935 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.215965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.216222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.216250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.216283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.216306 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.319893 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.320020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.320060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.320138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.320169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.423769 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527357 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.527416 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.630928 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735540 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.735565 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838495 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.838538 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.931452 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:20 crc kubenswrapper[4725]: E0120 11:05:20.931677 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.940916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.940963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.940977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.940994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:20 crc kubenswrapper[4725]: I0120 11:05:20.941006 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:20Z","lastTransitionTime":"2026-01-20T11:05:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.044460 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.084934 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 14:57:57.000148602 +0000 UTC Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.148555 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.252230 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.355615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459560 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.459573 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.562847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.563174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.563252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.563336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.563435 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.666429 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.769883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.873863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.873939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.873964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.873996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.874020 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.931625 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.931646 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.931846 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:21 crc kubenswrapper[4725]: E0120 11:05:21.932040 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:21 crc kubenswrapper[4725]: E0120 11:05:21.932223 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:21 crc kubenswrapper[4725]: E0120 11:05:21.932363 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:21 crc kubenswrapper[4725]: I0120 11:05:21.978302 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:21Z","lastTransitionTime":"2026-01-20T11:05:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.081277 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.085967 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-19 12:01:48.667504941 +0000 UTC Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.184166 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.287138 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.390292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493366 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.493532 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.595907 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.698568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.699022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.699286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.699514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.699732 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.802810 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.905558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.906126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.906331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.906530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.925969 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:22Z","lastTransitionTime":"2026-01-20T11:05:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.931886 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:22 crc kubenswrapper[4725]: E0120 11:05:22.932195 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.950882 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.973167 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:22 crc kubenswrapper[4725]: I0120 11:05:22.994168 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.017332 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.028196 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.045546 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.061204 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.084712 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.086746 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 13:38:21.552269967 +0000 UTC Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.103043 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.118426 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130157 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.130216 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.140386 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.153482 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.167072 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.185372 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.203339 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.221684 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.232489 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.237656 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.251856 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.335546 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.437819 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.540914 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.644444 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.746849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.849787 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.931607 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.931825 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.931998 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:23 crc kubenswrapper[4725]: E0120 11:05:23.931985 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:23 crc kubenswrapper[4725]: E0120 11:05:23.932202 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:23 crc kubenswrapper[4725]: E0120 11:05:23.932334 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952792 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:23 crc kubenswrapper[4725]: I0120 11:05:23.952824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:23Z","lastTransitionTime":"2026-01-20T11:05:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055782 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.055845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.087435 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 09:14:11.232490664 +0000 UTC Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158687 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.158824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.245827 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262204 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.262236 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.264297 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.271156 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.294972 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.312397 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.327144 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.362969 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365917 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.365934 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.382589 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.396663 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc 
kubenswrapper[4725]: I0120 11:05:24.413777 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.440857 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.457869 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.476573 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.508411 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":1,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 10s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.529617 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.552983 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.568197 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.580276 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.589177 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.611994 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 
2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.634669 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"pod
IPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:24Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.683906 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.786348 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889655 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.889689 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.931438 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:24 crc kubenswrapper[4725]: E0120 11:05:24.931592 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:24 crc kubenswrapper[4725]: I0120 11:05:24.992984 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:24Z","lastTransitionTime":"2026-01-20T11:05:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.088482 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-29 14:04:33.537960525 +0000 UTC Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.096964 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.200376 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.303930 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.407411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.509792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.613208 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716064 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.716132 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.818906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.818971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.818988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.819009 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.819026 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.859973 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:05:25 crc kubenswrapper[4725]: E0120 11:05:25.860309 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:05:57.860273377 +0000 UTC m=+86.068595390 (durationBeforeRetry 32s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.921883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:25Z","lastTransitionTime":"2026-01-20T11:05:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.931536 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.931570 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:25 crc kubenswrapper[4725]: I0120 11:05:25.931551 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:25 crc kubenswrapper[4725]: E0120 11:05:25.931657 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
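pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"

Note: two distinct failures interleave above. The "Error syncing pod" records are the CNI problem again, but the UnmountVolume.TearDown failure is separate: the CSI driver kubevirt.io.hostpath-provisioner has not re-registered with the kubelet since this restart, so volume teardown cannot proceed and the operation is re-queued ("No retries permitted until ... durationBeforeRetry 32s"). The 32 s delay here, and the 16 s delay on a later mount attempt in this log, are consistent with a per-operation exponential backoff that doubles on each consecutive failure. A toy sketch of such a schedule follows; the 500 ms seed, 2x factor, and 2 min cap are illustrative assumptions, not values taken from kubelet source.

# Sketch of a doubling retry backoff like the one the durationBeforeRetry
# values above suggest.
from datetime import timedelta

def backoff_schedule(initial, factor, cap, failures):
    """Delay before the retry that follows the Nth consecutive failure."""
    delays, d = [], initial
    for _ in range(failures):
        delays.append(min(d, cap))
        d = timedelta(seconds=d.total_seconds() * factor)
    return delays

if __name__ == "__main__":
    schedule = backoff_schedule(timedelta(milliseconds=500), 2.0,
                                timedelta(minutes=2), 8)
    for i, d in enumerate(schedule, 1):
        print(f"failure {i}: retry in {d.total_seconds():g}s")

Under these assumptions the delay reaches 16 s after the sixth consecutive failure and 32 s after the seventh, matching the magnitudes logged here; once the driver re-registers, the queued teardown can succeed on its next retry.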
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:25 crc kubenswrapper[4725]: E0120 11:05:25.931803 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:25 crc kubenswrapper[4725]: E0120 11:05:25.931894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.025595 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.088699 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-07 14:29:36.739793704 +0000 UTC Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135424 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.135435 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.162929 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.163007 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.163048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163226 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163336 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:58.163313075 +0000 UTC m=+86.371635078 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163366 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163382 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163443 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163469 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163408 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163518 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163568 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:58.163535351 +0000 UTC m=+86.371857404 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.163610 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:58.163592443 +0000 UTC m=+86.371914606 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.238891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.238950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.238974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.239007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.239031 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.342953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.343041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.343059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.343123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.343143 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.446856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.446930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.446957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.446987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.447009 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.465968 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.466196 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.466303 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:58.466275561 +0000 UTC m=+86.674597564 (durationBeforeRetry 32s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.550515 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.566760 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.567002 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.567177 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:05:42.567135256 +0000 UTC m=+70.775457329 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653523 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.653652 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.756580 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.860938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.861031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.861053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.861163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.861191 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.931454 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:26 crc kubenswrapper[4725]: E0120 11:05:26.931612 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.963700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.964169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.964322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.964406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:26 crc kubenswrapper[4725]: I0120 11:05:26.964481 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:26Z","lastTransitionTime":"2026-01-20T11:05:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067510 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067574 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.067606 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.089308 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 12:44:42.589106877 +0000 UTC Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170779 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.170797 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232530 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232588 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.232630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.250853 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
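event="NodeHasNoDiskPressure"

Note: the patch failure above is why none of these status updates stick. Each node-status PATCH is intercepted by the node.network-node-identity.openshift.io webhook at https://127.0.0.1:9743, and that webhook's serving certificate expired on 2025-08-24T17:21:41Z while the node clock reads 2026-01-20, so TLS verification fails and the kubelet retries indefinitely. Below is a minimal sketch for inspecting that serving certificate, with the host and port taken from the failed URL; the third-party cryptography package, and the endpoint completing a plain TLS handshake, are both assumptions.

# Fetch the webhook's serving certificate and compare its validity window
# against the local clock.
import ssl
from datetime import datetime, timezone

from cryptography import x509

HOST, PORT = "127.0.0.1", 9743  # from the failed webhook URL in the log

def check_cert(host, port):
    # get_server_certificate skips chain verification, which is the point:
    # we want to inspect a certificate the kubelet already distrusts.
    pem = ssl.get_server_certificate((host, port))
    cert = x509.load_pem_x509_certificate(pem.encode())
    not_after = cert.not_valid_after.replace(tzinfo=timezone.utc)
    state = "EXPIRED" if datetime.now(timezone.utc) > not_after else "valid"
    print(f"subject={cert.subject.rfc4514_string()} "
          f"notAfter={not_after:%Y-%m-%dT%H:%M:%SZ} -> {state}")

if __name__ == "__main__":
    check_cert(HOST, PORT)

Run on this node it should report EXPIRED until the certificate is rotated; until then, the repeated "Error updating node status, will retry" records that follow are expected.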
event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257262 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.257312 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.278907 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284068 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.284132 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.301954 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.307824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.324373 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.329758 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.352795 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:27Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.353041 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354787 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.354804 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.458544 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.561409 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.663919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.663970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.663982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.664019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.664031 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.766588 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.868891 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.931752 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.931859 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.931779 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.931908 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.932199 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:27 crc kubenswrapper[4725]: E0120 11:05:27.932169 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.933012 4725 scope.go:117] "RemoveContainer" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:27 crc kubenswrapper[4725]: I0120 11:05:27.971831 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:27Z","lastTransitionTime":"2026-01-20T11:05:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.074710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.075000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.075022 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.075051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.075074 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.089543 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 02:00:43.886607422 +0000 UTC Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177421 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.177653 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281450 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.281491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.383993 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486673 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.486714 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.589543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.691936 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.795301 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.855229 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/1.log" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.857872 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.858485 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.877136 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0
724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods 
\\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.897962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.898013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.898024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.898039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.898049 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:28Z","lastTransitionTime":"2026-01-20T11:05:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.902097 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.915675 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.931979 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:28 crc kubenswrapper[4725]: E0120 11:05:28.932128 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.936531 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0
bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.948939 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.963613 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.976344 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:28 crc kubenswrapper[4725]: I0120 11:05:28.988186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"st
arted\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:28Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012506 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.012548 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.020680 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.033287 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.049411 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.061213 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.076436 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.090339 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 23:26:47.049585 +0000 UTC Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.114662 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.185307 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.197893 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.214134 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216720 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.216742 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.226649 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.239213 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:29Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:29 crc 
kubenswrapper[4725]: I0120 11:05:29.318657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.318693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.318704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.318717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.318727 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421361 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421385 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.421398 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.524938 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.628372 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.731866 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834525 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.834568 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.862069 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/2.log" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.862648 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/1.log" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.864808 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" exitCode=1 Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.864843 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a"} Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.864876 4725 scope.go:117] "RemoveContainer" containerID="213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.865685 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:05:29 crc kubenswrapper[4725]: E0120 11:05:29.865859 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.989828 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:29 crc kubenswrapper[4725]: E0120 11:05:29.989994 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.990031 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.990212 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:29 crc kubenswrapper[4725]: E0120 11:05:29.990330 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:29 crc kubenswrapper[4725]: E0120 11:05:29.990459 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.991953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.991982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.991990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.992002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:29 crc kubenswrapper[4725]: I0120 11:05:29.992013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:29Z","lastTransitionTime":"2026-01-20T11:05:29Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.008074 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.032633 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.050420 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.070161 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0
bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://213b1d46dfc9f2aea3ec2cd9405a396f9c8c67155e76903665fb0dd7b18c74e3\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"message\\\":\\\"120 11:05:11.142731 6105 reflector.go:311] Stopping reflector *v1.EgressFirewall (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/egressfirewall/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.143046 6105 reflector.go:311] Stopping reflector *v1.Service (0s) from k8s.io/client-go/informers/factory.go:160\\\\nI0120 11:05:11.143213 6105 reflector.go:311] Stopping reflector *v1.ClusterUserDefinedNetwork (0s) from github.com/openshift/ovn-kubernetes/go-controller/pkg/crd/userdefinednetwork/v1/apis/informers/externalversions/factory.go:140\\\\nI0120 11:05:11.144063 6105 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:11.144108 6105 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:11.144130 6105 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:11.144164 6105 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:11.144173 6105 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:11.144190 6105 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:11.144200 6105 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:11.144207 6105 factory.go:656] Stopping watch factory\\\\nI0120 11:05:11.144221 6105 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:1\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:09Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for 
removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb
66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.088194 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.090713 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 20:26:39.979201558 +0000 UTC Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.094435 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.102774 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"
cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.118006 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling 
webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.129560 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.141010 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.162043 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.176824 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.189637 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196629 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.196723 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.201595 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.213407 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.224944 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.239243 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.255336 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 
11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.269227 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299690 4725 kubelet_node_status.go:724] "Recording event message 
for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.299732 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.402893 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.506316 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.609241 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.712160 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815399 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.815411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.869566 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/2.log" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.874325 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:05:30 crc kubenswrapper[4725]: E0120 11:05:30.874593 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.891191 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":
\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.906492 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917645 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.917682 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:30Z","lastTransitionTime":"2026-01-20T11:05:30Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.922145 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.931945 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:30 crc kubenswrapper[4725]: E0120 11:05:30.932061 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.940376 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.955233 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.971326 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.985101 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:30 crc kubenswrapper[4725]: I0120 11:05:30.999034 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:30Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.020381 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.021978 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.022008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.022018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.022034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.022044 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.035440 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.046796 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.061041 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.073931 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.087179 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.091227 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-12 08:33:43.923991293 +0000 UTC Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.104301 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.125162 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.126461 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.143154 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.163867 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:31Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.226824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.329399 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431905 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.431944 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.534597 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637726 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.637792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.740901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.740984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.741001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.741032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.741051 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.844906 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.932356 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.932368 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:31 crc kubenswrapper[4725]: E0120 11:05:31.932558 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:31 crc kubenswrapper[4725]: E0120 11:05:31.932734 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.932383 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:31 crc kubenswrapper[4725]: E0120 11:05:31.933411 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947772 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:31 crc kubenswrapper[4725]: I0120 11:05:31.947846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:31Z","lastTransitionTime":"2026-01-20T11:05:31Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.088802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.088899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.088945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.088983 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.089000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.092022 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-11 03:41:23.139576104 +0000 UTC Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191101 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191154 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.191196 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.293604 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396451 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396465 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.396475 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.499522 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602448 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.602564 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705094 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.705131 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807482 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807526 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.807567 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.909405 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:32Z","lastTransitionTime":"2026-01-20T11:05:32Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.931475 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:32 crc kubenswrapper[4725]: E0120 11:05:32.931633 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.946127 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:32Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.966678 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:32Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:32 crc kubenswrapper[4725]: I0120 11:05:32.984457 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:32Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.000115 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\
":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:32Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011000 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.011062 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.023693 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.043307 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.068326 4725 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5
b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.083038 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.092583 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 04:15:01.687985455 +0000 UTC Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.097280 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.112297 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113533 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113647 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.113906 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.128029 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.146140 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/k
ubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.191530 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.207419 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.216616 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.225364 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.238217 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.249637 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.260960 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:33Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.319957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.320002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.320011 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.320030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.320057 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423653 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423680 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.423700 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.526916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.527316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.527464 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.527618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.527774 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.631056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.631693 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.631894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.632125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.632404 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735804 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735823 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.735833 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839442 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.839649 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.931599 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.931706 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.931814 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:33 crc kubenswrapper[4725]: E0120 11:05:33.931829 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:33 crc kubenswrapper[4725]: E0120 11:05:33.931946 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:33 crc kubenswrapper[4725]: E0120 11:05:33.932101 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943115 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:33 crc kubenswrapper[4725]: I0120 11:05:33.943161 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:33Z","lastTransitionTime":"2026-01-20T11:05:33Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.046289 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.093095 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 11:44:57.069538618 +0000 UTC Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.148910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.149054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.149118 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.149151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.149169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252400 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.252437 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358334 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.358408 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.460913 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.564292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.690957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.691013 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.691028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.691053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.691069 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795438 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.795563 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898433 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898534 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.898624 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:34Z","lastTransitionTime":"2026-01-20T11:05:34Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:34 crc kubenswrapper[4725]: I0120 11:05:34.931824 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:34 crc kubenswrapper[4725]: E0120 11:05:34.932144 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.002782 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.093997 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 13:17:05.167620867 +0000 UTC Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106508 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.106530 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.209267 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312505 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.312543 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414753 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.414763 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517328 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.517366 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620391 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.620490 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722663 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.722724 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825751 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.825792 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928522 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.928531 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:35Z","lastTransitionTime":"2026-01-20T11:05:35Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.931663 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.931747 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:35 crc kubenswrapper[4725]: E0120 11:05:35.931772 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:35 crc kubenswrapper[4725]: I0120 11:05:35.931663 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:35 crc kubenswrapper[4725]: E0120 11:05:35.931981 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:35 crc kubenswrapper[4725]: E0120 11:05:35.932113 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.031690 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.095334 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 05:16:01.78606173 +0000 UTC Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138419 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.138436 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241566 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.241615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.344880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.344955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.344973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.344998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.345015 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447785 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447880 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.447892 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.550260 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.653353 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.756637 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859132 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.859220 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.931964 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:36 crc kubenswrapper[4725]: E0120 11:05:36.932231 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:36 crc kubenswrapper[4725]: I0120 11:05:36.961428 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:36Z","lastTransitionTime":"2026-01-20T11:05:36Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.063960 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.064003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.064015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.064031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.064043 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.095610 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 03:48:21.173392198 +0000 UTC Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.166186 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.268825 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.371212 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473057 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473176 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.473219 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.496905 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.519542 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524835 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524848 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.524856 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.540359 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545849 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545877 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.545913 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.562813 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.566826 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.582788 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586913 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.586998 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.601734 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:37Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.601908 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.603453 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.706688 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809727 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.809820 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913214 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.913329 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:37Z","lastTransitionTime":"2026-01-20T11:05:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.931298 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.931314 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:37 crc kubenswrapper[4725]: I0120 11:05:37.931339 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.931428 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.931534 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:37 crc kubenswrapper[4725]: E0120 11:05:37.931707 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015711 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015721 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.015747 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.096568 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-05 12:58:35.602264125 +0000 UTC Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.117942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.117981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.117993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.118007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.118016 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.220859 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.322895 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.323141 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.323213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.323281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.323347 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.425109 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528225 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.528280 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.631108 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.733312 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.835646 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.931363 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4"
Jan 20 11:05:38 crc kubenswrapper[4725]: E0120 11:05:38.931554 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:38 crc kubenswrapper[4725]: I0120 11:05:38.937615 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:38Z","lastTransitionTime":"2026-01-20T11:05:38Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.040541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.040900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.041043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.041192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.041277 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.096994 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 04:18:22.92627893 +0000 UTC
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.144502 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.246945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.246981 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.246991 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.247004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.247012 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349552 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.349640 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.453781 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557208 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.557217 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659892 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.659925 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762377 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.762414 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865576 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.865677 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.932117 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.932231 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:05:39 crc kubenswrapper[4725]: E0120 11:05:39.932278 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.932307 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:05:39 crc kubenswrapper[4725]: E0120 11:05:39.932444 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 11:05:39 crc kubenswrapper[4725]: E0120 11:05:39.932551 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.968989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.969038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.969052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.969071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:39 crc kubenswrapper[4725]: I0120 11:05:39.969101 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:39Z","lastTransitionTime":"2026-01-20T11:05:39Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.070889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.071181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.071244 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.071309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.071369 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.097310 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 20:25:19.100880966 +0000 UTC
Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.174342 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
[Editor's note: the same five-entry node-status cycle (NodeHasSufficientMemory, NodeHasNoDiskPressure, NodeHasSufficientPID, NodeNotReady, "Node became not ready") repeats with identical text, timestamps aside, at 11:05:40.277, .380, .482, .585, .688, .791 and .893; duplicate cycles elided.]
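[Editor's note: the condition payload logged by setters.go above is ordinary Kubernetes NodeCondition JSON. Below is a minimal stdlib-only Go sketch for decoding it; the struct mirrors the keys visible in the log and is an illustration, not kubelet source.]

    package main

    import (
    	"encoding/json"
    	"fmt"
    )

    // nodeCondition mirrors the keys seen in the "Node became not ready"
    // entries above; field names are chosen for this sketch only.
    type nodeCondition struct {
    	Type               string `json:"type"`
    	Status             string `json:"status"`
    	LastHeartbeatTime  string `json:"lastHeartbeatTime"`
    	LastTransitionTime string `json:"lastTransitionTime"`
    	Reason             string `json:"reason"`
    	Message            string `json:"message"`
    }

    func main() {
    	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:40Z","lastTransitionTime":"2026-01-20T11:05:40Z","reason":"KubeletNotReady","message":"container runtime network not ready"}`
    	var c nodeCondition
    	if err := json.Unmarshal([]byte(raw), &c); err != nil {
    		panic(err)
    	}
    	// Prints: Ready=False (KubeletNotReady): container runtime network not ready
    	fmt.Printf("%s=%s (%s): %s\n", c.Type, c.Status, c.Reason, c.Message)
    }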
Jan 20 11:05:40 crc kubenswrapper[4725]: I0120 11:05:40.931573 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4"
Jan 20 11:05:40 crc kubenswrapper[4725]: E0120 11:05:40.931825 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66"
[Node-status cycle at 11:05:40.997 elided; identical to the 11:05:40.174 cycle above.]
Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.097655 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-14 13:07:08.980036686 +0000 UTC
[Node-status cycles at 11:05:41.099, .202, .305, .408, .530, .633 and .736 elided; identical text with updated timestamps.]
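[Editor's note: the certificate_manager.go entries report different rotation deadlines (2025-11-23 above, 2025-11-14 here, 2025-12-26 below) for the same certificate expiring 2026-02-24. That is expected: the deadline is re-sampled with jitter on each check, and all sampled deadlines are already in the past at log time (2026-01-20), so rotation is due. A sketch of the jittered-deadline idea follows, assuming the commonly cited 70-90% span of the validity window and an assumed issue date; this is an illustration, not the literal client-go code.]

    package main

    import (
    	"fmt"
    	"math/rand"
    	"time"
    )

    // nextRotationDeadline picks a random point in the 70-90% span of the
    // certificate's validity window, which is why each sampling above
    // prints a different deadline.
    func nextRotationDeadline(notBefore, notAfter time.Time) time.Time {
    	total := notAfter.Sub(notBefore)
    	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
    	return notBefore.Add(jittered)
    }

    func main() {
    	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z")
    	notBefore := notAfter.Add(-365 * 24 * time.Hour) // assumed issue date
    	for i := 0; i < 3; i++ {
    		fmt.Println("rotation deadline:", nextRotationDeadline(notBefore, notAfter))
    	}
    }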
[Node-status cycle at 11:05:41.838 elided.]
Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.931584 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.931668 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:05:41 crc kubenswrapper[4725]: I0120 11:05:41.931598 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:05:41 crc kubenswrapper[4725]: E0120 11:05:41.931728 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
Jan 20 11:05:41 crc kubenswrapper[4725]: E0120 11:05:41.931807 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 11:05:41 crc kubenswrapper[4725]: E0120 11:05:41.932294 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
[Node-status cycles at 11:05:41.941 and 11:05:42.044 elided.]
Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.098776 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 02:43:45.175063546 +0000 UTC
[Node-status cycles at 11:05:42.148, .253, .356, .459 and .561 elided.]
Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.573821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4"
Jan 20 11:05:42 crc kubenswrapper[4725]: E0120 11:05:42.574115 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered
Jan 20 11:05:42 crc kubenswrapper[4725]: E0120 11:05:42.574218 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:06:14.574193616 +0000 UTC m=+102.782515589 (durationBeforeRetry 32s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered
[Node-status cycle at 11:05:42.664 elided.]
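[Editor's note: the "No retries permitted until ... (durationBeforeRetry 32s)" entry shows the volume manager's per-operation exponential backoff from nestedpendingoperations.go. The Go sketch below assumes a 500 ms initial delay, a doubling factor and a cap; under those assumed constants the seventh consecutive failure yields exactly the 32 s seen here. This is an illustration of the policy's shape, not the kubelet's literal constants.]

    package main

    import (
    	"fmt"
    	"time"
    )

    // Assumed constants for illustration only.
    const (
    	initialBackoff = 500 * time.Millisecond
    	maxBackoff     = 2 * time.Minute
    )

    // durationBeforeRetry doubles the delay per consecutive failure,
    // capped at maxBackoff. Seven straight failures -> 32s, as logged.
    func durationBeforeRetry(failures int) time.Duration {
    	d := initialBackoff
    	for i := 1; i < failures; i++ {
    		d *= 2
    		if d >= maxBackoff {
    			return maxBackoff
    		}
    	}
    	return d
    }

    func main() {
    	for f := 1; f <= 8; f++ {
    		fmt.Printf("failure %d -> retry in %v\n", f, durationBeforeRetry(f))
    	}
    }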
[Node-status cycles at 11:05:42.766 and 11:05:42.869 elided.]
Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.932200 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4"
Jan 20 11:05:42 crc kubenswrapper[4725]: E0120 11:05:42.932519 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.967443 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"
recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257
453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed 
container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\
\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:42Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973738 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.973797 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:42Z","lastTransitionTime":"2026-01-20T11:05:42Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:42 crc kubenswrapper[4725]: I0120 11:05:42.989285 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/ku
bernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:42Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.019020 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.033547 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.048958 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 
20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.064097 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: 
x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.076909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.076963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.076975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.076995 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.077006 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.084129 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-rel
ease-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin
\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14
006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.099971 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-17 22:42:21.123324199 +0000 UTC Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.100840 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\
":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.119683 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.137272 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.157604 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180886 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180928 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180941 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.180979 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.183583 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.201424 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.217048 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.232675 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.250351 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.266758 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.280960 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:43Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.283943 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.283989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.284007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.284031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.284048 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.386825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.387163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.387258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.387375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.387496 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.489872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.489933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.489953 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.489977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.490033 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.596397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699432 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699484 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.699511 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802440 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802452 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802471 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.802482 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908326 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908427 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.908442 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:43Z","lastTransitionTime":"2026-01-20T11:05:43Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.931563 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.931568 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:43 crc kubenswrapper[4725]: I0120 11:05:43.931614 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:43 crc kubenswrapper[4725]: E0120 11:05:43.931703 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:43 crc kubenswrapper[4725]: E0120 11:05:43.931860 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:43 crc kubenswrapper[4725]: E0120 11:05:43.932009 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.011186 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.100962 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-27 02:22:34.455650511 +0000 UTC Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114113 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114197 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.114249 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.216313 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319292 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319354 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.319383 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.421866 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524695 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524715 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.524742 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628091 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628180 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.628232 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.730813 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833579 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.833680 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.932071 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:44 crc kubenswrapper[4725]: E0120 11:05:44.932218 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:44 crc kubenswrapper[4725]: I0120 11:05:44.936260 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:44Z","lastTransitionTime":"2026-01-20T11:05:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038266 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.038336 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.102095 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-14 15:20:26.49045766 +0000 UTC Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.141522 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.244842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.347310 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.449720 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.552681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.553310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.553337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.553358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.553388 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656317 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.656359 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.758842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861853 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.861876 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.931851 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:45 crc kubenswrapper[4725]: E0120 11:05:45.932430 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.931875 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:45 crc kubenswrapper[4725]: E0120 11:05:45.932509 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.932638 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:05:45 crc kubenswrapper[4725]: E0120 11:05:45.932836 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 20s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.931851 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:45 crc kubenswrapper[4725]: E0120 11:05:45.933343 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:45 crc kubenswrapper[4725]: I0120 11:05:45.965169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:45Z","lastTransitionTime":"2026-01-20T11:05:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066722 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066773 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.066796 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.102216 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-18 02:44:16.345655341 +0000 UTC Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.169875 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.272996 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.273162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.273192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.273224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.273249 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375497 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.375638 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478397 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.478424 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.580920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.580986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.581007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.581033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.581053 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683736 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.683771 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.786738 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889557 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.889566 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.931434 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:46 crc kubenswrapper[4725]: E0120 11:05:46.931681 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.960672 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/0.log" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.960735 4725 generic.go:334] "Generic (PLEG): container finished" podID="627f7c97-4173-413f-a90e-e2c5e058c53b" containerID="60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad" exitCode=1 Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.960770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerDied","Data":"60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad"} Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.961231 4725 scope.go:117] "RemoveContainer" containerID="60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.981000 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:46Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992054 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992121 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:46 crc kubenswrapper[4725]: I0120 11:05:46.992138 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:46Z","lastTransitionTime":"2026-01-20T11:05:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.010494 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.027936 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.045600 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.067421 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.083144 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.094994 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095040 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095051 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095078 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095101 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.095961 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.102859 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 21:32:41.36578562 +0000 UTC Jan 20 
11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.115300 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\
\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.140875 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.158744 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.184311 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0
bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.197973 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.200666 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath
\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.214385 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.229356 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 
2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.242616 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.255981 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.273263 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.286891 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:46Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303696 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.303774 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.407584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509808 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509847 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.509873 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612041 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.612071 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714619 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.714658 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816851 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.816877 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.883336 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.905350 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.909541 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.925859 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930179 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.930230 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.941697 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.941751 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.941860 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.942091 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.941805 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.942415 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.946069 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950294 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.950306 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.966508 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/0.log" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.966567 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6"} Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.966407 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.971340 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.982442 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"o
vnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.986561 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{...}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:47Z is after 2025-08-24T17:21:41Z" [node status patch payload identical to the 11:05:47.925859 attempt above; elided] Jan 20 11:05:47 crc kubenswrapper[4725]: E0120 11:05:47.986679 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988490 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc"
event="NodeHasSufficientMemory" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988544 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:47 crc kubenswrapper[4725]: I0120 11:05:47.988557 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:47Z","lastTransitionTime":"2026-01-20T11:05:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.004749 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0
bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.022986 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"i
mage\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 
'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.044158 4725 status_manager.go:875] "Failed to update status 
for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"
containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5
b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.058203 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod 
\"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.069629 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.079809 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.090707 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.090765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.090780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.090796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.091219 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.096484 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.104032 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 16:45:58.347544682 +0000 UTC Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.109227 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.122387 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.133918 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.150382 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.161888 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.174289 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.187664 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.199970 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.215299 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.228401 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:48Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.406443 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509724 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.509799 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612437 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612463 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.612484 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719320 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719349 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.719361 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.823629 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.926677 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:48Z","lastTransitionTime":"2026-01-20T11:05:48Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:48 crc kubenswrapper[4725]: I0120 11:05:48.931903 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:48 crc kubenswrapper[4725]: E0120 11:05:48.932070 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028858 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.028872 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.105136 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-31 04:06:11.693582473 +0000 UTC Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132369 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.132445 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.235465 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.338752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.339111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.339216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.339353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.339457 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442873 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.442917 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.546849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.650431 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.753658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.754350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.754406 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.754438 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.754458 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856481 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.856491 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.931765 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.931825 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:49 crc kubenswrapper[4725]: E0120 11:05:49.931896 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.931765 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:49 crc kubenswrapper[4725]: E0120 11:05:49.932172 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:49 crc kubenswrapper[4725]: E0120 11:05:49.932253 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.980970 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.981014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.981023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.981035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:49 crc kubenswrapper[4725]: I0120 11:05:49.981045 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:49Z","lastTransitionTime":"2026-01-20T11:05:49Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"}
Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.084842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:50Z","lastTransitionTime":"2026-01-20T11:05:50Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.106191 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-09 12:41:41.894623393 +0000 UTC
[... the same five-record status block (NodeHasSufficientMemory / NodeHasNoDiskPressure / NodeHasSufficientPID / NodeNotReady / "Node became not ready") repeats with identical content at 11:05:50.188, .292, .395, .497, .600, .703, .806, and .909/.910 ...]
Jan 20 11:05:50 crc kubenswrapper[4725]: I0120 11:05:50.931893 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4"
Jan 20 11:05:50 crc kubenswrapper[4725]: E0120 11:05:50.932119 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66"
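The repeating condition above is the kubelet relaying the container runtime's network status: CRI-O reports NetworkReady=false until a CNI network configuration appears in /etc/kubernetes/cni/net.d/, which on this cluster is written by the OVN-Kubernetes pods once they come up. A minimal Go sketch of that directory probe, assuming the conventional .conf/.conflist/.json names (the exact matching logic lives in the CNI library, so treat this as an illustration, not the runtime's actual code):

package main

import (
	"fmt"
	"os"
	"path/filepath"
)

// confDir mirrors the path from the log; the extension list is an
// assumption based on conventional CNI config file names.
const confDir = "/etc/kubernetes/cni/net.d"

func networkReady() (bool, error) {
	entries, err := os.ReadDir(confDir)
	if err != nil {
		return false, err // a missing directory also counts as not ready
	}
	for _, e := range entries {
		switch filepath.Ext(e.Name()) {
		case ".conf", ".conflist", ".json":
			return true, nil // at least one CNI config present
		}
	}
	return false, nil // empty dir: NetworkReady=false, node stays NotReady
}

func main() {
	ready, err := networkReady()
	fmt.Println("NetworkReady:", ready, "err:", err)
}

Until that probe would return true, the Ready condition above stays False and sandbox creation for pod-network pods keeps failing.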
[... the status block repeats at 11:05:51.012, now with 2026-01-20T11:05:51Z heartbeat and transition times ...]
Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.107229 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-11 05:33:46.671066248 +0000 UTC
[... the status block repeats at 11:05:51.114, .217, .320, .424, .528, .632, .736, and .839 ...]
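Note that every rotation deadline logged in this window (2026-01-09, 2025-11-11, and 2025-12-03 below) is already in the past relative to the node clock of 2026-01-20, so the kubelet treats serving-certificate rotation as due immediately and recomputes a fresh randomized deadline on each pass, which is why the value changes from line to line. A sketch of that computation, assuming client-go's documented behavior of jittering within roughly the 70-90% band of the certificate's validity window (the constants are an assumption from upstream behavior, not copied code):

package main

import (
	"fmt"
	"math/rand"
	"time"
)

// rotationDeadline picks a random instant in roughly the 70-90% band of
// the certificate's validity window, as the kubelet's certificate manager
// does; once that instant is in the past, rotation is attempted at once.
func rotationDeadline(notBefore, notAfter time.Time) time.Time {
	total := notAfter.Sub(notBefore)
	jittered := time.Duration(float64(total) * (0.7 + 0.2*rand.Float64()))
	return notBefore.Add(jittered)
}

func main() {
	notAfter, _ := time.Parse(time.RFC3339, "2026-02-24T05:53:03Z") // expiry from the log
	notBefore := notAfter.Add(-365 * 24 * time.Hour)                // issuance time is not in the log; assumed
	fmt.Println("rotate at:", rotationDeadline(notBefore, notAfter))
}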
Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.931846 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.931893 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:05:51 crc kubenswrapper[4725]: I0120 11:05:51.931901 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:05:51 crc kubenswrapper[4725]: E0120 11:05:51.932028 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5"
Jan 20 11:05:51 crc kubenswrapper[4725]: E0120 11:05:51.932228 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447"
Jan 20 11:05:51 crc kubenswrapper[4725]: E0120 11:05:51.932383 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8"
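Only pod-network pods hit this error: the host-network pods later in this log (etcd-crc, kube-apiserver-crc, ovnkube-control-plane) keep running because a pod that shares the host's network namespaces does not need a CNI sandbox. A sketch of that distinction, with hypothetical names, since the real check is spread across the kubelet's pod workers and runtime manager:

package main

import "fmt"

// pod and canStartSandbox are hypothetical stand-ins for kubelet
// internals; they illustrate the guard behind "Error syncing pod, skipping".
type pod struct {
	name        string
	hostNetwork bool
}

func canStartSandbox(networkReady bool, p pod) error {
	if p.hostNetwork {
		return nil // host-network pods bypass the CNI requirement
	}
	if !networkReady {
		return fmt.Errorf("network is not ready: no CNI configuration for pod %s", p.name)
	}
	return nil
}

func main() {
	fmt.Println(canStartSandbox(false, pod{name: "network-metrics-daemon-5lfc4"}))
	fmt.Println(canStartSandbox(false, pod{name: "etcd-crc", hostNetwork: true}))
}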
[... the status block repeats at 11:05:51.943 and 11:05:52.047, heartbeat and transition times tracking the wall clock ...]
Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.107667 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-03 15:40:14.314038437 +0000 UTC
[... the status block repeats at 11:05:52.150, .254, .356, .459, .594, and .696, all with 2026-01-20T11:05:52Z heartbeat and transition times ...]
Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800547 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.800586 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.904280 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:52Z","lastTransitionTime":"2026-01-20T11:05:52Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.932268 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:52 crc kubenswrapper[4725]: E0120 11:05:52.932411 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.947061 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:52Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.961681 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:52Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.972206 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:52Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:52 crc kubenswrapper[4725]: I0120 11:05:52.984503 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:52Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.002400 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.006434 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.015776 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.030933 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc 
kubenswrapper[4725]: I0120 11:05:53.049013 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.075992 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.091813 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.107891 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-13 06:22:33.647465803 +0000 UTC Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109008 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109073 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.109114 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.113878 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":2,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 20s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.130400 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.148186 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.164980 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 
2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.180463 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.195993 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212289 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212335 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212378 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212403 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212416 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.212392 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.229825 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:53Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314718 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314735 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.314747 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418598 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.418648 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521769 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521824 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521839 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.521849 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.624859 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726681 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726717 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.726753 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829145 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.829154 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.931219 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:53 crc kubenswrapper[4725]: E0120 11:05:53.931366 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.931504 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.931504 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:53 crc kubenswrapper[4725]: E0120 11:05:53.931671 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:53 crc kubenswrapper[4725]: E0120 11:05:53.931937 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.932668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.932790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.932883 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.932963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:53 crc kubenswrapper[4725]: I0120 11:05:53.933030 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:53Z","lastTransitionTime":"2026-01-20T11:05:53Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.035955 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.036003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.036014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.036033 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.036046 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.108727 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 19:18:09.82347916 +0000 UTC Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140840 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140925 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.140937 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244260 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.244383 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.347392 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450401 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450453 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.450461 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554651 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.554669 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657511 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.657633 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760468 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760524 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760539 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.760548 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.863506 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.932161 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:54 crc kubenswrapper[4725]: E0120 11:05:54.932427 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966664 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966676 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:54 crc kubenswrapper[4725]: I0120 11:05:54.966700 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:54Z","lastTransitionTime":"2026-01-20T11:05:54Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075446 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.075490 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.109169 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 15:22:07.189068616 +0000 UTC Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.177931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.178209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.178340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.178429 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.178516 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.280930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.280976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.280993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.281010 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.281023 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383580 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.383780 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487371 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487455 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.487467 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.589666 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692238 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692305 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.692318 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795221 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795273 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795312 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.795328 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898683 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.898767 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:55Z","lastTransitionTime":"2026-01-20T11:05:55Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.931276 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.931339 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:55 crc kubenswrapper[4725]: I0120 11:05:55.931276 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:55 crc kubenswrapper[4725]: E0120 11:05:55.931434 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:55 crc kubenswrapper[4725]: E0120 11:05:55.931522 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:55 crc kubenswrapper[4725]: E0120 11:05:55.931810 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.035795 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.109468 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-22 20:49:52.033780129 +0000 UTC Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137686 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137778 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137828 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.137847 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241459 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241491 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.241516 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345236 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.345373 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449276 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.449398 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552870 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.552880 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655794 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.655890 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758339 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758394 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758430 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758454 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.758470 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.860734 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.932368 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:56 crc kubenswrapper[4725]: E0120 11:05:56.932607 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963780 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:56 crc kubenswrapper[4725]: I0120 11:05:56.963811 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:56Z","lastTransitionTime":"2026-01-20T11:05:56Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066066 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066189 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.066235 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.110478 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-18 05:34:39.196391356 +0000 UTC Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169314 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169324 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169340 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.169351 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.271660 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374866 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374879 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.374917 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.477885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.477938 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.477958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.477982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.478000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.580441 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.682912 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784969 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.784999 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.868763 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:05:57 crc kubenswrapper[4725]: E0120 11:05:57.869037 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:01.86900019 +0000 UTC m=+150.077322173 (durationBeforeRetry 1m4s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.887921 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.887990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.888006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.888029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.888043 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.932129 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.932144 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.932434 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:57 crc kubenswrapper[4725]: E0120 11:05:57.932603 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:57 crc kubenswrapper[4725]: E0120 11:05:57.932803 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:57 crc kubenswrapper[4725]: E0120 11:05:57.932871 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.932901 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992612 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992632 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992660 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:57 crc kubenswrapper[4725]: I0120 11:05:57.992682 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:57Z","lastTransitionTime":"2026-01-20T11:05:57Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.095805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.096388 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.096412 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.096428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.096440 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.111008 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-15 19:57:21.475067378 +0000 UTC Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148389 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.148411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.170215 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.172364 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.172433 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.172471 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172568 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172599 4725 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172615 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172636 4725 projected.go:194] Error preparing data for projected volume kube-api-access-cqllr for pod openshift-network-diagnostics/network-check-target-xd92c: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172687 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.172665972 +0000 UTC m=+150.380987965 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin" not registered
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172713 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr podName:3b6479f0-333b-4a96-9adf-2099afdc2447 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.172702283 +0000 UTC m=+150.381024266 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-cqllr" (UniqueName: "kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr") pod "network-check-target-xd92c" (UID: "3b6479f0-333b-4a96-9adf-2099afdc2447") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172715 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172760 4725 projected.go:288] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172777 4725 projected.go:194] Error preparing data for projected volume kube-api-access-s2dwl for pod openshift-network-diagnostics/network-check-source-55646444c4-trplf: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.172855 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl podName:9d751cbb-f2e2-430d-9754-c882a5e924a5 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.172830157 +0000 UTC m=+150.381152140 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "kube-api-access-s2dwl" (UniqueName: "kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl") pod "network-check-source-55646444c4-trplf" (UID: "9d751cbb-f2e2-430d-9754-c882a5e924a5") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered]
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175574 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175587 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175604 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.175619 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.189228 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.193942 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.193976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.193990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.194007 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.194019 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.209781 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.213959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.214165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.214267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.214352 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.214499 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.233554 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.238746 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.258544 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:58Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:58Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.258887 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260375 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260507 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.260752 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.364816 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.365252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.365582 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.365763 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.365939 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.468613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.468890 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.468964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.469045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.469130 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.475115 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.475284 4725 secret.go:188] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.475343 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert podName:5fe485a1-e14f-4c09-b5b9-f252bc42b7e8 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.475330609 +0000 UTC m=+150.683652572 (durationBeforeRetry 1m4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert") pod "networking-console-plugin-85b44fc459-gdk6g" (UID: "5fe485a1-e14f-4c09-b5b9-f252bc42b7e8") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571915 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571951 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571975 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.571991 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.675898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.676239 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.676257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.676278 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.676292 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.778973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.779047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.779067 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.779148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.779169 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881667 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881760 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.881839 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.932218 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:05:58 crc kubenswrapper[4725]: E0120 11:05:58.932460 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985532 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985581 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985597 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:58 crc kubenswrapper[4725]: I0120 11:05:58.985607 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:58Z","lastTransitionTime":"2026-01-20T11:05:58Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.043961 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/2.log" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.046057 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.046503 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.063791 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.077655 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088136 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088163 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088171 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088185 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.088194 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.091798 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.106826 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.111717 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-19 04:43:51.070777852 +0000 UTC Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.129311 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.145004 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.167184 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c9
60813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\
\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.184631 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190267 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190318 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.190329 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.198709 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2
c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/
secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volu
meMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.210945 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.223029 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.233684 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.246196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.261550 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.275482 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293056 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293172 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.293197 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.294270 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.313521 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.324255 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396203 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396238 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.396250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.531947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.531990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.532003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.532019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.532031 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634194 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634218 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.634270 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737030 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737149 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737210 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737249 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.737287 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.932000 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:05:59 crc kubenswrapper[4725]: E0120 11:05:59.933413 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.932537 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:05:59 crc kubenswrapper[4725]: E0120 11:05:59.933664 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.932486 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:05:59 crc kubenswrapper[4725]: E0120 11:05:59.933877 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.974985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.975503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.975920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.976253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.976508 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:05:59Z","lastTransitionTime":"2026-01-20T11:05:59Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:05:59 crc kubenswrapper[4725]: I0120 11:05:59.990379 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc"] Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.079069 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.112558 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-08 21:06:44.64007413 +0000 UTC Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182100 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.182260 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285346 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.285374 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.387998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.388059 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.388126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.388177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.388197 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492286 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492342 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.492365 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596625 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596666 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.596719 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.700435 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803844 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.803862 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906137 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906165 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.906196 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:00Z","lastTransitionTime":"2026-01-20T11:06:00Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:00 crc kubenswrapper[4725]: I0120 11:06:00.932303 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:00 crc kubenswrapper[4725]: E0120 11:06:00.932425 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008793 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.008863 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.054264 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/3.log" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.056533 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/2.log" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.059995 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" exitCode=1 Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.060034 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.060102 4725 scope.go:117] "RemoveContainer" containerID="0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.061338 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:01 crc kubenswrapper[4725]: E0120 11:06:01.064204 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.079212 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.093954 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.108069 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111837 4725 kubelet_node_status.go:724] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111891 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111909 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.111949 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.113473 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 11:50:28.334649357 +0000 UTC Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.123188 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"im
age\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mo
untPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"container
ID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.135865 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.151138 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.167573 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.181349 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with 
unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.194785 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.206452 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215633 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.215686 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.220643 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.234534 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.245246 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.256219 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.274408 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.293003 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.312059 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 
0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/op
enshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317904 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317918 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.317930 4725 setters.go:603] "Node became not ready" 
node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.326504 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"l
astState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.348985 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:01Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420285 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420373 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.420502 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523817 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.523843 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.626929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.626989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.627015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.627045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.627056 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730568 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.730595 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833611 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.833635 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.931637 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.931682 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.931707 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:01 crc kubenswrapper[4725]: E0120 11:06:01.931813 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:01 crc kubenswrapper[4725]: E0120 11:06:01.931967 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:01 crc kubenswrapper[4725]: E0120 11:06:01.932165 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936864 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936894 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936903 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936916 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:01 crc kubenswrapper[4725]: I0120 11:06:01.936925 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:01Z","lastTransitionTime":"2026-01-20T11:06:01Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.039972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.040112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.040133 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.040159 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.040176 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.065111 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/3.log" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.113859 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-26 01:31:13.234204942 +0000 UTC Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.143233 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.245865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.245948 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.245973 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.246003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.246026 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350177 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350207 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.350231 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453306 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.453351 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556255 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.556354 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660609 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.660654 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.763871 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.866795 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.932473 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:02 crc kubenswrapper[4725]: E0120 11:06:02.932678 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.950875 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.967120 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.969149 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:02Z","lastTransitionTime":"2026-01-20T11:06:02Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:02 crc kubenswrapper[4725]: I0120 11:06:02.985720 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-20T11:06:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.001770 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:02Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.018443 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.039476 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.053915 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":
true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.065216 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.072781 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.088093 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\
":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.i
o/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://0df14056d49b67ac1748d91fe783b4325ea3fdf0bf12e15fb57677446235308a\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:29Z\\\",\\\"message\\\":\\\"20 11:05:29.267425 6319 handler.go:190] Sending *v1.Node event handler 7 for removal\\\\nI0120 11:05:29.267458 6319 handler.go:208] Removed *v1.Node event handler 7\\\\nI0120 11:05:29.267476 6319 handler.go:208] Removed *v1.Node event handler 2\\\\nI0120 11:05:29.268196 6319 handler.go:190] Sending *v1.EgressIP event handler 8 for removal\\\\nI0120 11:05:29.268265 6319 handler.go:190] Sending *v1.Namespace event handler 1 for removal\\\\nI0120 11:05:29.268284 6319 handler.go:190] Sending *v1.Namespace event handler 5 for removal\\\\nI0120 11:05:29.268333 6319 handler.go:190] Sending *v1.Pod event handler 3 for removal\\\\nI0120 11:05:29.268348 6319 handler.go:190] Sending *v1.Pod event handler 6 for removal\\\\nI0120 11:05:29.268385 6319 handler.go:208] Removed *v1.Pod event handler 6\\\\nI0120 11:05:29.268403 6319 handler.go:208] Removed *v1.EgressIP event handler 8\\\\nI0120 11:05:29.268419 6319 handler.go:208] Removed *v1.Namespace event handler 1\\\\nI0120 11:05:29.268431 6319 handler.go:208] Removed *v1.Namespace event handler 5\\\\nI0120 11:05:29.268442 6319 handler.go:208] Removed *v1.Pod event handler 3\\\\nI0120 11:05:29.269138 6319 handler.go:190] Sending *v1.EgressFirewall event handler 9 for removal\\\\nI0120 11:05:29.269231 6319 factory.go:656] Stopping watch factory\\\\nI0120 11:05:29.269255 6319 ovnkube.go:599] Stopped ovnkube\\\\nI0120 
11:05:2\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:28Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, 
Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art
-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.106647 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"qua
y.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' 
detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.114131 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 
2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 16:44:23.628409275 +0000 UTC Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.136354 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T
11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118
e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.160223 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.173174 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.175902 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.175968 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.175990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.176034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.176054 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.191248 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.206538 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"read
Only\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed
\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 
2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.221756 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.234543 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.251166 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.272408 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:03Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.278901 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.278980 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.279003 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.279034 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.279056 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381618 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.381630 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484897 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484946 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.484988 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587259 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.587363 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689634 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.689750 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.791933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.792004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.792024 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.792050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.792068 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894786 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.894833 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.931843 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.931889 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.931916 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:03 crc kubenswrapper[4725]: E0120 11:06:03.932000 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:03 crc kubenswrapper[4725]: E0120 11:06:03.932113 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:03 crc kubenswrapper[4725]: E0120 11:06:03.932303 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997253 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997354 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:03 crc kubenswrapper[4725]: I0120 11:06:03.997371 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:03Z","lastTransitionTime":"2026-01-20T11:06:03Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099881 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099924 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.099943 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.114734 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-24 16:37:15.02669279 +0000 UTC Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202251 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202263 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.202333 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305191 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305271 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305302 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.305331 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408303 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408350 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.408395 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512426 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512617 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.512709 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616065 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616119 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616153 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.616175 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.720504 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823341 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823404 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823422 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.823436 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927131 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927164 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.927177 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:04Z","lastTransitionTime":"2026-01-20T11:06:04Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:04 crc kubenswrapper[4725]: I0120 11:06:04.931585 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:04 crc kubenswrapper[4725]: E0120 11:06:04.931724 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030226 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.030285 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.115805 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-23 03:42:08.614187768 +0000 UTC Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.132869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.132936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.132961 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.132993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.133057 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235815 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.235824 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339305 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339428 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.339493 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442252 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.442281 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544855 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.544879 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.647919 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.647977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.647990 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.648021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.648035 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.749949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.749989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.749997 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.750012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.750023 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.853122 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.931843 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.931887 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.931938 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:05 crc kubenswrapper[4725]: E0120 11:06:05.932067 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:05 crc kubenswrapper[4725]: E0120 11:06:05.932219 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:05 crc kubenswrapper[4725]: E0120 11:06:05.932333 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956014 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956063 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956090 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:05 crc kubenswrapper[4725]: I0120 11:06:05.956100 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:05Z","lastTransitionTime":"2026-01-20T11:06:05Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059053 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.059223 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.116602 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-26 19:32:21.820955413 +0000 UTC Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162055 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162127 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162143 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.162153 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264745 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264776 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.264789 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368170 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368242 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368268 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.368286 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470742 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.470838 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573781 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573862 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.573895 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677436 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.677579 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.780772 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884624 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884708 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884764 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.884786 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.931690 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:06 crc kubenswrapper[4725]: E0120 11:06:06.931963 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987631 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987729 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:06 crc kubenswrapper[4725]: I0120 11:06:06.987743 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:06Z","lastTransitionTime":"2026-01-20T11:06:06Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090829 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.090843 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.117667 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-09 04:34:03.111493072 +0000 UTC Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193509 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193595 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.193610 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296487 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.296528 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400309 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.400384 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503551 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503608 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503620 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503635 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.503645 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606563 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606630 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606650 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.606694 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.708875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.709004 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.709045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.709070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.709151 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812527 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812538 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.812566 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914939 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914949 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914962 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.914972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:07Z","lastTransitionTime":"2026-01-20T11:06:07Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.931509 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.931596 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:07 crc kubenswrapper[4725]: I0120 11:06:07.931651 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:07 crc kubenswrapper[4725]: E0120 11:06:07.931739 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:07 crc kubenswrapper[4725]: E0120 11:06:07.931869 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:07 crc kubenswrapper[4725]: E0120 11:06:07.932051 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017423 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017473 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.017515 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.117809 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-22 03:38:40.380298124 +0000 UTC Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119734 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119790 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119830 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.119842 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223009 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223060 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.223178 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.327988 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.328015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.328023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.328035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.328043 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431307 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431360 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.431381 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534230 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.534406 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546657 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.546688 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.564745 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569095 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.569117 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.585609 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593514 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593558 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.593610 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.645177 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649706 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.649771 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.680461 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684956 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684982 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.684992 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.697904 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:08Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:08Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.698027 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
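
Every "Error updating node status, will retry" entry above fails the same way: the API server cannot complete the TLS handshake with the node.network-node-identity.openshift.io webhook on 127.0.0.1:9743 because that webhook's serving certificate expired on 2025-08-24T17:21:41Z, months before the logged wall-clock time of 2026-01-20. A minimal sketch to confirm the expiry from the node itself (assumes Python with the third-party cryptography package; host, port, and expected dates are taken from the error text above):

    import ssl
    from cryptography import x509

    # Fetch the webhook's serving certificate without validating it; it is
    # expired, so a verifying client (like the API server above) rejects it.
    pem = ssl.get_server_certificate(("127.0.0.1", 9743))
    cert = x509.load_pem_x509_certificate(pem.encode())
    print("notBefore:", cert.not_valid_before)
    print("notAfter: ", cert.not_valid_after)  # expect 2025-08-24 17:21:41
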
event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699521 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699529 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699542 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.699550 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802112 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802154 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.802168 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.905375 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:08Z","lastTransitionTime":"2026-01-20T11:06:08Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:08 crc kubenswrapper[4725]: I0120 11:06:08.932111 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:08 crc kubenswrapper[4725]: E0120 11:06:08.932283 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008463 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008513 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008531 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008553 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.008569 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110316 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110356 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110368 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110384 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.110394 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.118526 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-01 20:41:11.746731945 +0000 UTC Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212749 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212796 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212807 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212822 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.212846 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314678 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.314700 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418215 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.418238 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521347 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521536 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521562 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.521605 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624622 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624648 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.624666 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727237 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727280 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727297 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.727307 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830719 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.830733 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.931556 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.931624 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.931559 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:09 crc kubenswrapper[4725]: E0120 11:06:09.931783 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:09 crc kubenswrapper[4725]: E0120 11:06:09.931918 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:09 crc kubenswrapper[4725]: E0120 11:06:09.932007 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933705 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933728 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:09 crc kubenswrapper[4725]: I0120 11:06:09.933749 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:09Z","lastTransitionTime":"2026-01-20T11:06:09Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.037332 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.118888 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-05 12:03:46.249751403 +0000 UTC Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139061 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139144 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139162 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.139175 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242586 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.242596 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.931480 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:10 crc kubenswrapper[4725]: E0120 11:06:10.931669 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960878 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960921 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:10 crc kubenswrapper[4725]: I0120 11:06:10.960959 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:10Z","lastTransitionTime":"2026-01-20T11:06:10Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062770 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.062789 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.119168 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 22:02:03.054434706 +0000 UTC Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165652 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165755 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165775 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.165817 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268282 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268329 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268382 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.268393 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.932495 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.932802 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.932934 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:11 crc kubenswrapper[4725]: E0120 11:06:11.933064 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.933342 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:11 crc kubenswrapper[4725]: E0120 11:06:11.933519 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:11 crc kubenswrapper[4725]: E0120 11:06:11.933807 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:06:11 crc kubenswrapper[4725]: E0120 11:06:11.934019 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.946577 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:11Z is after 2025-08-24T17:21:41Z" Jan 
20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.967961 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.989930 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.989963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.989971 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.989984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.990013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:11Z","lastTransitionTime":"2026-01-20T11:06:11Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:11 crc kubenswrapper[4725]: I0120 11:06:11.990487 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:11Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.004549 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.018998 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.034157 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.047139 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly
\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.058289 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.072207 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092752 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.092790 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.096513 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.110959 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.119800 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-08 18:04:53.132519994 +0000 UTC Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.140861 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.157375 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-ar
t-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.174275 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"imag
e\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.190196 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 
2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195585 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195663 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.195683 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.205412 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 
11:06:12.217978 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet 
valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.232623 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e7
61bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name
\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.250679 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298703 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298821 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.298860 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container 
runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401515 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401554 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.401592 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504899 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504936 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.504951 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608174 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608234 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608250 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608272 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.608286 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712411 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712575 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712594 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.712608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814601 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.814626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.921806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.922139 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.922243 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.922362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.922433 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:12Z","lastTransitionTime":"2026-01-20T11:06:12Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.931430 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:12 crc kubenswrapper[4725]: E0120 11:06:12.931770 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.946815 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.967218 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2c
c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-re
lease\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:12 crc kubenswrapper[4725]: I0120 11:06:12.982545 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in /usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. 
pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.000749 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:12Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.017834 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024498 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024570 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.024592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 
11:06:13.024608 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.033350 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.047564 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.058323 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.072431 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.085241 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.096280 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: 
[networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.106166 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recu
rsiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.118849 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.120990 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-07 13:35:51.131156284 +0000 UTC Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126869 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.126937 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.129441 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook 
\"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.139988 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc 
kubenswrapper[4725]: I0120 11:06:13.151662 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\
\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.173766 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bb
c05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\"
,\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.187920 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36
cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.212553 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: 
[ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.
0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c9
60813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:13Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229603 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229646 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229672 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.229684 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332111 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332148 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.332188 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433788 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433871 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.433882 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536572 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536623 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.536645 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.639584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742458 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742478 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742504 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.742523 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844675 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.844742 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.932016 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:13 crc kubenswrapper[4725]: E0120 11:06:13.932486 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.932340 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:13 crc kubenswrapper[4725]: E0120 11:06:13.932718 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.932301 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:13 crc kubenswrapper[4725]: E0120 11:06:13.932898 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948410 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:13 crc kubenswrapper[4725]: I0120 11:06:13.948423 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:13Z","lastTransitionTime":"2026-01-20T11:06:13Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.051456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.051836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.051958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.052158 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.052288 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.121901 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-01 00:08:42.10928996 +0000 UTC Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154812 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154931 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154958 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.154972 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258184 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258777 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.258864 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.361976 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.362010 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.362021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.362036 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.362047 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.464604 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.567406 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.641779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:14 crc kubenswrapper[4725]: E0120 11:06:14.641965 4725 secret.go:188] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:06:14 crc kubenswrapper[4725]: E0120 11:06:14.642020 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs podName:a5d55efc-e85a-4a02-a4ce-7355df9fea66 nodeName:}" failed. No retries permitted until 2026-01-20 11:07:18.642007864 +0000 UTC m=+166.850329827 (durationBeforeRetry 1m4s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs") pod "network-metrics-daemon-5lfc4" (UID: "a5d55efc-e85a-4a02-a4ce-7355df9fea66") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669516 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.669616 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772332 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772381 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772396 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.772406 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875031 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875082 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.875134 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.931276 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:14 crc kubenswrapper[4725]: E0120 11:06:14.931558 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978408 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978435 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978444 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:14 crc kubenswrapper[4725]: I0120 11:06:14.978465 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:14Z","lastTransitionTime":"2026-01-20T11:06:14Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080479 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.080511 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.122770 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-21 22:08:11.368230692 +0000 UTC Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182640 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182702 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.182726 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286290 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286353 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286370 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286393 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.286411 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389809 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389861 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.389883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.492699 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.492944 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.492985 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.493015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.493040 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595723 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595774 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595789 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.595826 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.698910 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.699223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.699310 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.699445 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.699527 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803020 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803110 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803129 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.803140 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905446 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905503 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.905533 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:15Z","lastTransitionTime":"2026-01-20T11:06:15Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.931995 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.932074 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:15 crc kubenswrapper[4725]: E0120 11:06:15.932208 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:15 crc kubenswrapper[4725]: E0120 11:06:15.932326 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:15 crc kubenswrapper[4725]: I0120 11:06:15.932536 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:15 crc kubenswrapper[4725]: E0120 11:06:15.932708 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.008987 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.009076 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.009130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.009155 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.009172 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111610 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111701 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111732 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.111755 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.123966 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 09:13:07.496403791 +0000 UTC Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.215331 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318305 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318358 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318368 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318386 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.318397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422021 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422173 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.422184 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566713 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566820 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566836 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566857 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.566868 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669217 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669275 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669291 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.669300 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772528 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772583 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.772593 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875048 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875151 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875211 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.875232 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.931984 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:16 crc kubenswrapper[4725]: E0120 11:06:16.932189 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978801 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978871 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978885 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:16 crc kubenswrapper[4725]: I0120 11:06:16.978921 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:16Z","lastTransitionTime":"2026-01-20T11:06:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081742 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.081768 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.124872 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-30 14:14:06.466007128 +0000 UTC Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184128 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184319 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.184351 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287300 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287336 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.287373 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390825 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390912 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390945 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.390980 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493477 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493543 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493556 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493577 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.493591 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597147 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597190 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.597233 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.699940 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.700032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.700047 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.700071 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.700105 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802733 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802761 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.802774 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905137 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905188 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905216 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.905229 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:17Z","lastTransitionTime":"2026-01-20T11:06:17Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.931546 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:17 crc kubenswrapper[4725]: E0120 11:06:17.931668 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.931731 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:17 crc kubenswrapper[4725]: I0120 11:06:17.931549 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:17 crc kubenswrapper[4725]: E0120 11:06:17.932053 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:17 crc kubenswrapper[4725]: E0120 11:06:17.932125 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.007450 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.007737 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.007831 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.007922 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.008004 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110731 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110762 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.110774 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.125001 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-28 08:12:49.780898846 +0000 UTC Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213441 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213476 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213486 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213499 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.213509 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316392 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316434 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316451 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316467 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.316478 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419142 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419222 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419248 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.419267 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522192 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522288 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.522305 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625537 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625548 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.625576 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728044 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728084 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728140 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.728150 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732120 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732178 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732196 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732212 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.732223 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.747456 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.751972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.752006 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.752018 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.752032 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.752041 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.771193 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777125 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777205 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.777250 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.798745 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805257 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805414 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
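The records above repeat two signatures at high frequency: the kubelet re-recording "Node became not ready" roughly every 100ms while no CNI configuration exists, and the node-status patch being rejected by the node.network-node-identity.openshift.io webhook. A minimal tally script can make the volume and the root cause visible at a glance. This is an illustrative sketch only: the input path `kubelet.log` is an assumption (e.g. a dump saved with `journalctl -u kubelet > kubelet.log`), not something present in the log itself.

```python
#!/usr/bin/env python3
"""Tally the repeating kubelet records above: NotReady heartbeats,
rejected node-status patches, and the underlying x509 failure.

Sketch under assumptions: input file path is hypothetical.
"""
from collections import Counter

counts = Counter()
root_cause = None

with open("kubelet.log", encoding="utf-8", errors="replace") as fh:
    for line in fh:
        # Classify each journal line by the structured log message it carries.
        if '"Node became not ready"' in line:
            counts["node became not ready"] += 1
        if '"Error updating node status, will retry"' in line:
            counts["node status patch rejected"] += 1
        if root_cause is None and "x509: certificate has expired" in line:
            # Keep the first occurrence of the webhook TLS failure verbatim.
            start = line.index("x509: certificate has expired")
            root_cause = line[start:start + 120]

for what, n in counts.most_common():
    print(f"{what}: {n}")
if root_cause:
    print("first x509 failure:", root_cause, "...")
```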
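The patch rejections all trace to the same TLS failure: the webhook endpoint at https://127.0.0.1:9743 serves a certificate whose notAfter (2025-08-24T17:21:41Z) is before the node's current time (2026-01-20). A quick way to confirm this independently of the kubelet is to fetch the served certificate without verification and read its end date. A minimal sketch, assuming Python 3 with the standard library plus an `openssl` binary on PATH; the host and port are taken from the webhook URL in the error above, everything else is illustrative.

```python
#!/usr/bin/env python3
"""Check whether the TLS certificate served on the webhook port has
expired -- the failure mode logged for
node.network-node-identity.openshift.io above.

Sketch under assumptions: `openssl` on PATH is assumed, and a server
that requires TLS client authentication may refuse the probe handshake.
"""
import ssl
import subprocess
import time

HOST, PORT = "127.0.0.1", 9743  # from the webhook URL in the log

# Fetch the PEM without verifying it (verification is exactly what fails).
pem = ssl.get_server_certificate((HOST, PORT))

# Extract the notAfter date with the openssl CLI.
out = subprocess.run(
    ["openssl", "x509", "-noout", "-enddate"],
    input=pem, capture_output=True, text=True, check=True,
).stdout.strip()                 # e.g. "notAfter=Aug 24 17:21:41 2025 GMT"

not_after = ssl.cert_time_to_seconds(out.split("=", 1)[1])
if not_after < time.time():
    print(f"EXPIRED: cert on {HOST}:{PORT} lapsed at {out.split('=', 1)[1]}")
else:
    print(f"OK: cert on {HOST}:{PORT} valid until {out.split('=', 1)[1]}")
```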
event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805440 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.805494 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.822875 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828245 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.828289 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.845476 4725 kubelet_node_status.go:585] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32404556Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32865356Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-20T11:06:18Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b9ea248f8ca33258fe1683da51d2b16b94630be1b361c65f68a16c1a34b94887\\\"],\\\"sizeBytes\\\":2887430265},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:4a62fa1c0091f6d94e8fb7258470b9a532d78364b6b51a05341592041d598562\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:8db792bab418e30d9b71b9e1ac330ad036025257abbd2cd32f318ed14f70d6ac\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1523204510},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\"],\\\"sizeBytes\\\":1498102846},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\"],\\\"sizeBytes\\\":1232839934},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:8ff55cdb2367f5011074d2f5ebdc153b8885e7495e14ae00f99d2b7ab3584ade\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:d656c1453f2261d9b800f5c69fba3bc2ffdb388414c4c0e89fcbaa067d7614c4\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1151049424},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:1d7d4739b2001bd173f2632d5f73724a5034237ee2d93a02a21bbfff547002ba\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:7688bce5eb0d153adff87fc9f7a47642465c0b88208efb236880197969931b37\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"],\\\"sizeBytes\\\":1032059094},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:0878ac12c537fcfc617a539b3b8bd329ba568bb49c6e3bb47827b177c47ae669\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:1dc15c170ebf462dacaef75511740ed94ca1da210f3980f66d77f91ba201c875\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"],\\\"sizeBytes\\\":1001152198},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\"],\\\"sizeBytes\\\":964552795},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\"],\\\"sizeBytes\\\":947616130},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3cc3840d7a81ce1b420f06e07a923861faf37d9c10688aa3aa0b7b76c8706ad\\\"],\\\"sizeBytes\\\":907837715},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:101f295e2eae0755ae1865f7de885db1f17b9368e4120a713bb5f79e17ce8f93\\\"],\\\"sizeBytes\\\":854694423},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:47b0670fa1051335fd2d2c9e8361e4ed77c7760c33a2180b136f7c7f59863ec2\\\"],\\\"sizeBytes\\\":852490370},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:862f4a4bed52f372056b6d368e2498ebfb063075b31cf48dbdaaeedfcf0396cb\\\"],\\\"sizeBytes\\\":772592048},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\"],\\\"sizeBytes\\\":705793115},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\"],\\\"sizeBytes\\\":687915987},{\\\"names
\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f247257b0885cf5d303e3612c7714b33ae51404cfa2429822060c6c025eb17dd\\\"],\\\"sizeBytes\\\":668060419},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\"],\\\"sizeBytes\\\":613826183},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3e9dc0b02b9351edf7c46b1d46d724abd1ac38ecbd6bc541cee84a209258d8\\\"],\\\"sizeBytes\\\":581863411},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\"],\\\"sizeBytes\\\":574606365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ee8d8f089ec1488067444c7e276c4e47cc93840280f3b3295484d67af2232002\\\"],\\\"sizeBytes\\\":550676059},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:10f20a39f16ae3019c62261eda8beb9e4d8c36cbb7b500b3bae1312987f0685d\\\"],\\\"sizeBytes\\\":541458174},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\"],\\\"sizeBytes\\\":533092226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\"],\\\"sizeBytes\\\":528023732},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\"],\\\"sizeBytes\\\":510867594},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:0b6ae0d091d2bf49f9b3a3aff54aabdc49e70c783780f118789f49d8f95a9e03\\\"],\\\"sizeBytes\\\":510526836},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\"],\\\"sizeBytes\\\":507459597},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e9e7dd2b1a8394b7490ca6df8a3ee8cdfc6193ecc6fb6173ed9a1868116a207\\\"],\\\"sizeBytes\\\":505721947},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:094bb6a6641b4edbaf932f0551bcda20b0d4e012cbe84207348b24eeabd351e9\\\"],\\\"sizeBytes\\\":504778226},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c69fe7a98a744b7a7b61b2a8db81a338f373cd2b1d46c6d3f02864b30c37e46c\\\"],\\\"sizeBytes\\\":504735878},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e51e6f78ec20ef91c82e94a49f950e427e77894e582dcc406eec4df807ddd76e\\\"],\\\"sizeBytes\\\":502943148},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\"],\\\"sizeBytes\\\":501379880},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:3a741253807c962189819d879b8fef94a9452fb3f5f3969ec3207eb2d9862205\\\"],\\\"sizeBytes\\\":500472212},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\"],\\\"sizeBytes\\\":498888951},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5aa9e5379bfeb63f4e517fb45168eb6820138041641bbdfc6f4db6427032fa37\\\"],\\\"sizeBytes\\\":497832828},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\"],\\\"sizeBytes\\\":497742284},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:88b1f0a05a1b1c
91e1212b40f0e7d04c9351ec9d34c52097bfdc5897b46f2f0e\\\"],\\\"sizeBytes\\\":497120598},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:737e9019a072c74321e0a909ca95481f5c545044dd4f151a34d0e1c8b9cf273f\\\"],\\\"sizeBytes\\\":488494681},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fe009d03910e18795e3bd60a3fd84938311d464d2730a2af5ded5b24e4d05a6b\\\"],\\\"sizeBytes\\\":487097366},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:66760a53b64d381940757ca9f0d05f523a61f943f8da03ce9791e5d05264a736\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:e97a0cb5b6119a9735efe0ac24630a8912fcad89a1dddfa76dc10edac4ec9815\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":485998616},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\"],\\\"sizeBytes\\\":485767738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:898cae57123c5006d397b24af21b0f24a0c42c9b0be5ee8251e1824711f65820\\\"],\\\"sizeBytes\\\":485535312},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1eda5ad6a6c5b9cd94b4b456e9116f4a0517241b614de1a99df14baee20c3e6a\\\"],\\\"sizeBytes\\\":479585218},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:487c0a8d5200bcdce484ab1169229d8fcb8e91a934be45afff7819c4f7612f57\\\"],\\\"sizeBytes\\\":476681373},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b641ed0d63034b23d07eb0b2cd455390e83b186e77375e2d3f37633c1ddb0495\\\"],\\\"sizeBytes\\\":473958144},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:32f9e10dfb8a7c812ea8b3e71a42bed9cef05305be18cc368b666df4643ba717\\\"],\\\"sizeBytes\\\":463179365},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8fdf28927b06a42ea8af3985d558c84d9efd142bb32d3892c4fa9f5e0d98133c\\\"],\\\"sizeBytes\\\":460774792},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dd0628f89ad843d82d5abfdc543ffab6a861a23cc3005909bd88fa7383b71113\\\"],\\\"sizeBytes\\\":459737917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\"],\\\"sizeBytes\\\":457588564},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:adabc3456bf4f799f893d792cdf9e8cbc735b070be346552bcc99f741b0a83aa\\\"],\\\"sizeBytes\\\":450637738},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:342dca43b5b09123737ccda5e41b4a5d564e54333d8ce04d867d3fb968600317\\\"],\\\"sizeBytes\\\":448887027}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"6eec783f-1471-434e-9e46-81d4bd7eabfe\\\",\\\"systemUUID\\\":\\\"38403e10-86da-4c2a-98da-84319c85ddeb\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:18Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.845695 4725 kubelet_node_status.go:572] "Unable to update node status" err="update node status exceeds retry count" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" 
event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847756 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847776 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.847789 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.932059 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:18 crc kubenswrapper[4725]: E0120 11:06:18.932227 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949642 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949679 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949689 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949707 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:18 crc kubenswrapper[4725]: I0120 11:06:18.949724 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:18Z","lastTransitionTime":"2026-01-20T11:06:18Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052330 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052345 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052362 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.052374 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.126820 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-14 12:30:12.848956282 +0000 UTC Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154325 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154383 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154402 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154431 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.154451 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.256974 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.257012 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.257023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.257038 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.257049 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359798 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359850 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359861 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359876 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.359887 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463365 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463420 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.463441 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566418 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566502 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566519 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.566556 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669138 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669213 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669229 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.669242 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772337 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772405 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772427 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.772444 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875606 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875641 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.875652 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.931958 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.932028 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.932047 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:19 crc kubenswrapper[4725]: E0120 11:06:19.932236 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:19 crc kubenswrapper[4725]: E0120 11:06:19.932387 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:19 crc kubenswrapper[4725]: E0120 11:06:19.932591 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978806 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978875 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978900 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978932 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:19 crc kubenswrapper[4725]: I0120 11:06:19.978954 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:19Z","lastTransitionTime":"2026-01-20T11:06:19Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081750 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081810 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081834 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.081843 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.126970 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-29 01:26:10.26363544 +0000 UTC Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184746 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.184845 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287596 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.287679 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389474 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389520 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389550 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.389562 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492274 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492323 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.492349 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595045 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595492 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.595569 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.697966 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.698023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.698039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.698062 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.698116 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.800462 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.800814 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.800934 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.801049 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.801189 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.903952 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.903992 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.904001 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.904015 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.904026 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:20Z","lastTransitionTime":"2026-01-20T11:06:20Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:20 crc kubenswrapper[4725]: I0120 11:06:20.931878 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:20 crc kubenswrapper[4725]: E0120 11:06:20.932172 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007267 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007279 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007295 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.007305 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110668 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110813 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110838 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110868 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.110894 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.127597 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 13:48:06.219061337 +0000 UTC Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212720 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212759 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212783 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.212791 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314716 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314768 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314784 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.314794 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417744 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417799 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417811 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.417912 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.520898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.520959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.520972 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.520989 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.521000 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624355 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624387 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624398 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624414 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.624425 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727637 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727684 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727697 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727710 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.727719 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830417 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830460 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830472 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830489 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.830499 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.931196 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.931244 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.931281 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:21 crc kubenswrapper[4725]: E0120 11:06:21.931402 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:21 crc kubenswrapper[4725]: E0120 11:06:21.931569 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:21 crc kubenswrapper[4725]: E0120 11:06:21.931769 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932818 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932898 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932911 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932927 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:21 crc kubenswrapper[4725]: I0120 11:06:21.932938 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:21Z","lastTransitionTime":"2026-01-20T11:06:21Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035488 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035555 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035573 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.035584 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.128494 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 06:56:31.878540487 +0000 UTC Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140193 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140246 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140287 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.140299 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242700 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242743 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242754 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242771 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.242784 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345126 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345160 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345169 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345183 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.345192 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447198 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447283 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447311 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447344 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.447365 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.550827 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.551240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.551359 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.551466 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.551551 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654023 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654072 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654103 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654117 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.654126 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756628 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756674 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756692 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.756703 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859914 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859933 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859964 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.859982 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.932134 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:22 crc kubenswrapper[4725]: E0120 11:06:22.932670 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.954932 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-z7f69" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:06Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://ae24afc3ce0c84e73f9bb47971b3c4b00b16894c2780a1bc4785995949114fe4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:06Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e40792096b162f0f9ce5f8362f51e5f8dea2c1ce4b1447235388416b5db7708c\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://38c355917de044af6aa2a577b7fd35e0a7fd9b355898ce661c4cded008fe16cf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"reason\\\":\\\"Completed\\\"
,\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:687fddfbb085a1688df312ce4ec8c857df9b2daed8ff4a7ed6163a1154afa2cc\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://a74fef6c77b483466590ae742ab2e761bf33014fa8997bdfd314eeb08da6b521\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:383f4cceeeaead203bb2327fdd367c64b64d729d7fa93089f249e496fcef0c78\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d67cceac372e68d2562957e356b34c76ae7bd62b84b10a66ed9b1175c90703cb\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:00Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:00Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"i
mageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:f567acb85146b5ed81451ec3e79f2de0c62e28c69b2eeade0abdf5d0c388e7aa\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5afdfd287fc9f0abf4ae3ebbb7062d75a08ba619643bdc56e93d8d35135f006c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:01Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://c073ae8b3fa3c47edcbd447b60b7d4e35de52af50de86b19819fdcbe2c375b64\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:03Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:98100674616e54319f6713d742fd0c3bdbc84e6e6173e8ccf4a2473a714c2bc4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://838b6d8bf72b14006a52d1040034c920f5a3674872bd58dceda9b1af81922906\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:05:05Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:05Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-q68t4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\"
:\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-z7f69\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962590 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962669 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.962696 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:22Z","lastTransitionTime":"2026-01-20T11:06:22Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.970817 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/multus-vchwb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"627f7c97-4173-413f-a90e-e2c5e058c53b\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:47Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7eeaee65f2808b819eedb413bdcabb9144e12f0dd97f13fd1afba93a95b67b26\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:05:45Z\\\",\\\"message\\\":\\\"2026-01-20T11:05:00+00:00 [cnibincopy] Successfully copied files in 
/usr/src/multus-cni/rhel9/bin/ to /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f\\\\n2026-01-20T11:05:00+00:00 [cnibincopy] Successfully moved files in /host/opt/cni/bin/upgrade_7206f67f-e00a-4c52-85e1-ba634c61119f to /host/opt/cni/bin/\\\\n2026-01-20T11:05:00Z [verbose] multus-daemon started\\\\n2026-01-20T11:05:00Z [verbose] Readiness Indicator file check\\\\n2026-01-20T11:05:45Z [error] have you checked that your default network is ready? still waiting for readinessindicatorfile @ /host/run/multus/cni/net.d/10-ovn-kubernetes.conf. pollimmediate error: timed out waiting for the condition\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:47Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-jbnsp\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-multus\"/\"multus-vchwb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:22 crc kubenswrapper[4725]: I0120 11:06:22.985511 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9ebd4678-64d7-4f2b-ba43-09d949f71a4d\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://2dfc922df674be94a81a0dadaece15a9bca98167e019a5d2250b2492dfdfe44f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c0f9da410c07372b6c9ad6a79379b491cd10fdee88051c026b084652d85aed21\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://d2031da08e3c31761494138c5f679fb84c53d1e8bfc2d988380a1f0808588b0d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://c85a0309041dccaf77b27caa91242cc5d4c071894d0184483f9bc2721a8065ee\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manage
r-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:8506ce0a578bc18fac117eb2b82799488ffac0bed08287faaf92edaf5d17ab95\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.001779 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"dd5489d4-628b-4079-b05c-06f8dcf74f1f\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9068777e4f01e848a82bf288d7a3392cdc91101ee3a01bb4a81df93821dd63a3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://7c38f828d227876595bd9deb8b2d85fb59815051b2e2bfa76a4708ffed580f1c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4
.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b35e6d9a60b46ffa8c3718ab0311f05ffc829ac99c3a07206ee12402ebad872a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5b881c97aa8e440c6b3ca001edfd789a9380066b8f11f35a8dd8d88c5c7dbf86\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://fd9da09edfaad6c39e1875fa23c733c75bec99af8ed08dab0325ea3703d06436\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:22Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.021696 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/network-operator-58b4c7f79c-55gtf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"37a5e44f-9a88-4405-be8a-b645485e7312\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://710f7f7517b106f65eb0165e6bb2abaec3ee7f207f878ad0ab8cf94cc43e397a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rdwmf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-58b4c7f79c-55gtf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.037639 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-4ln5h" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"d75a4c96-2883-4a0b-bab2-0fab2b6c0b49\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://a091b7f2407c7a2a6677ce14ff27715427951b74b86261aa90754a030dae99be\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rczfb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-4ln5h\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.050238 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a4c10a0-687d-4b24-b1a9-5aba619c0668\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6438e81f87d7b39e355ab44ce94ed177a31c8ee616803df08c91dd6d5c23d46d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c915fb8ba96e911699a1ae34a8e95ca8a9fbe1bf8c28fea177225c63a8bdfc0a\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-47wsh\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-z2gv8\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.062643 4725 status_manager.go:875] "Failed to 
update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"0cdc8181-1be1-4da5-9049-40528ec5f9b7\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://8132f828bbc5741a0a318fe005b753dff87f06d263948c088483daca665ce86f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://6faa6708fa29d5eb071d0383feaa4d4ca6f04d52a3b5ad147a929daee25712aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064315 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064343 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064363 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064380 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.064393 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.076647 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3b6479f0-333b-4a96-9adf-2099afdc2447\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-cqllr\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-xd92c\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z"
Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.089490 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae647598ec35cda5766806d3d44a91e3b9d4dee48ff154f3d8490165399873fd\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-85b44fc459-gdk6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.100356 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-dns/node-resolver-c9dck" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3acff9b-8c0b-4a8a-b81f-449be15f3aef\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:54Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://18dc127da7131ef4109e27c83b6a85d5d46e45d10ac5b884ec6043b7cab53c07\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:35512335ac39aed0f55b7f799f416f4f6445c20c1b19888cf2bb72bb276703f2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-szb2t\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:54Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-c9dck\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.109982 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-image-registry/node-ca-fv2jh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a3fffa1c-6d54-432d-9090-da67cd8ca2ee\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:58Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://50580c974842cc11f6f926835f5cc709e6bd830b25dca09bd6c1501a6907bd83\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9fa29d188c85a8b1e1bd15c9c18e96f1b235da9bd4a45dbc086a4a69520ed63f\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-rh4k2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:57Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-fv2jh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.123184 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9d751cbb-f2e2-430d-9754-c882a5e924a5\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e1baa38811c04bd8909e01a1f3be7421a1cb99d608d3dc4cf86d95b17de2ab8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2dwl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-55646444c4-trplf\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.129183 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-30 09:32:02.686930509 +0000 UTC Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.139369 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"6de4324f-3428-4409-92a4-940e5b94fe12\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:09Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://cd1e6cdd5a3a83a486d49ef12b4b6156dc710c8c2d76db25a6f3dac813861405\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://94804e9db4cc8f6189b493bfd5399bbf7438b62f17a8c6a6521b1e0746303d6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:10Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-bfkbm\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:09Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-749d76644c-8ls4r\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 
11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.150049 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"a5d55efc-e85a-4a02-a4ce-7355df9fea66\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:10Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d98bb346a17feae024d92663df92b25c120938395ab7043afbed543c6db9ca8d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-lljhl\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:05:10Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-5lfc4\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.162995 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"bb0f33a8-410b-4912-82e7-7ef77344fd80\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:05:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-opera
tor@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-20T11:04:52Z\\\",\\\"message\\\":\\\"lling back to namespace): Get \\\\\\\"https://localhost:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\\\\\\\": net/http: TLS handshake timeout\\\\nI0120 11:04:45.795778 1 builder.go:304] check-endpoints version 4.18.0-202502101302.p0.g763313c.assembly.stream.el9-763313c-763313c860ea43fcfc9b1ac00ebae096b57c078e\\\\nI0120 11:04:45.796857 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-581973985/tls.crt::/tmp/serving-cert-581973985/tls.key\\\\\\\"\\\\nI0120 11:04:52.352731 1 requestheader_controller.go:247] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0120 11:04:52.356215 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0120 11:04:52.356240 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0120 11:04:52.356289 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0120 11:04:52.356299 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0120 11:04:52.360909 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0120 11:04:52.360939 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0120 11:04:52.360946 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0120 11:04:52.360954 1 genericapiserver.go:533] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0120 11:04:52.361845 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0120 11:04:52.361861 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0120 11:04:52.361865 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0120 11:04:52.361869 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0120 11:04:52.365537 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not 
found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":true,\\\"restartCount\\\":1,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:53Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:06bc35825771aee1220d34720243b89c4ba8a8b335e6de2597126bd791fd90d4\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166613 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166659 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166670 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166685 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.166693 4725 setters.go:603] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.187043 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"e9da325a-9c67-4207-8d64-4a8cb0cdb1cb\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:53Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://69eeb751839efc5ebfd6b47f649fa5c614c869b87356aea87c21dfd4f5480240\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://b81cb2adf14250c941a89ca7cb2c2b503d5ce0c8627bd18f5e376017c2cf77d8\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2db6e917cc87a58decbe33c0bc308e35b13c818140a70098924d890994dce615\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/open
shift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://2cffb76dc5b6c5eac425aed7dc61ed0a87a07bbc05f3eee3e3425b285beda96c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a0fa3723269019bee1847b26702f42928e779036cc2f58408f8ee7866be30a93\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:37Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://9704bf7447628a927dfef8e8bcd3e63085f1c34aa6cb2b58e6e2ea3e9d33e4c7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:36Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://be8ed3eb12790a8d1160fa64d44572ee0149db7ad4eb97a675e814b2ff99530c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:33Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:33Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"}]},{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd
6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://faf6e6f12e47f3a7ebd9432b3cfb9d378fc0de1d40118e505515f9717e367621\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:34Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:34Z\\\"}}},{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:07b7c6877441ecd6a5646fb68e33e9be8b90092272e49117b54b4a67314731ca\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://5b7ab88a7d78368e0154869405059f9d4e66456b70945f39107097f36f815092\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:35Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:35Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/usr/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:33Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.200384 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-vrzqb" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"ef543e1b-8068-4ea3-b32a-61027b32e95d\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://dd3d142abfeb20fcd199d9ab57a832e160245f757eb601fd19836fbff8fa10f1\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:55Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://465196e042e81c7805782ccad9a638d7ffd98eda95e7dd771bdb6342fd110182\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:54Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-s2kz5\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-vrzqb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.220772 4725 status_manager.go:875] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"9143f3c2-a068-494d-b7e1-4200c04394a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:57Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-20T11:04:55Z\\\",\\\"message\\\":\\\"containers with unready status: [ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:242b3d66438c42745f4ef318bdeaf3d793426f12962a42ea83e18d06c08aaf09\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":
\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:59Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:58Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\
"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:04:57Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241\\\",\\\"exitCode\\\":1,\\\"finishedAt\\\":\\\"2026-01-20T11:06:00Z\\\",\\\"message\\\":\\\"le to create admin network policy controller, err: could not add Event Handler for anpInformer during admin network policy controller initialization, handler {0x1fcc6e0 0x1fcc3c0 0x1fcc360} was not added to shared informer because it has stopped already, failed to start node network controller: failed to start default node network controller: failed to set node crc annotations: Internal error occurred: failed calling webhook \\\\\\\"node.network-node-identity.openshift.io\\\\\\\": failed to call webhook: Post \\\\\\\"https://127.0.0.1:9743/node?timeout=10s\\\\\\\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:05:59Z is after 2025-08-24T17:21:41Z]\\\\nI0120 11:05:59.983920 6721 services_controller.go:473] Services do not match for network=default, existing lbs: []services.LB{services.LB{Name:\\\\\\\"Service_openshift-cluster-version/cluster-version-operator_TCP_cluster\\\\\\\", UUID:\\\\\\\"61d39e4d-21a9-4387-9a2b-fa4ad14792e2\\\\\\\", Protocol:\\\\\\\"tcp\\\\\\\", ExternalIDs:map[string]string{\\\\\\\"k8s.ovn.org/kind\\\\\\\":\\\\\\\"Service\\\\\\\", \\\\\\\"k8s.ovn.org/owner\\\\\\\":\\\\\\\"openshift-cluster-version/cluster-version-operator\\\\\\\"}, Opts:services.LBOpts{Reject:false, EmptyLBEvents:false, AffinityTimeOut:0, SkipSNAT:false, Template:false, AddressFamily:\\\\\\\"\\\\\\\"}, Rules:[]services.LBRule{}, Templates:services.TemplateMap{}, Switches:[]string{}, Routers:[]string{}, Groups:[]string{\\\\\\\"clusterLB\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-20T11:05:58Z\\\"}},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 40s restarting failed container=ovnkube-controller 
pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"containerID\\\":\\\"cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-20T11:05:01Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"image\\\":\\\"quay.io/openshift-release-
dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174f36cdd47ef0d1d2099482919d773257453265a2af0b17b154edc32fa41ac2\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":true,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-20T11:04:56Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-20T11:04:56Z\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fsm7k\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-20T11:04:55Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-nz9p5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": tls: failed to verify certificate: x509: certificate has expired or is not yet valid: current time 2026-01-20T11:06:23Z is after 2025-08-24T17:21:41Z" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268856 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268872 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.268883 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.370929 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.370967 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.370984 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.371002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.371013 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.473841 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.474232 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.474500 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.474747 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.474961 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577644 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577730 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577741 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577950 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.577970 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681264 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681321 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.681348 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.783993 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.784027 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.784035 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.784050 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.784059 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887037 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887098 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887109 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887124 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.887135 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.931527 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.931534 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.931620 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:23 crc kubenswrapper[4725]: E0120 11:06:23.932009 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:23 crc kubenswrapper[4725]: E0120 11:06:23.932220 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:23 crc kubenswrapper[4725]: E0120 11:06:23.932265 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.932356 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:23 crc kubenswrapper[4725]: E0120 11:06:23.932741 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989615 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989677 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989694 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:23 crc kubenswrapper[4725]: I0120 11:06:23.989707 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:23Z","lastTransitionTime":"2026-01-20T11:06:23Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092052 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092107 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092144 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.092154 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.129384 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-04 21:59:44.190824728 +0000 UTC Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195156 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195166 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195181 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.195194 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303327 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303395 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303409 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303447 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.303468 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405602 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405643 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405655 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.405682 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508416 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508559 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508571 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508589 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.508603 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611564 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611638 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611665 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.611677 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714161 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714200 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714219 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714233 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.714244 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817376 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.817399 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919308 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919322 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.919332 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:24Z","lastTransitionTime":"2026-01-20T11:06:24Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:24 crc kubenswrapper[4725]: I0120 11:06:24.932394 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:24 crc kubenswrapper[4725]: E0120 11:06:24.932533 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021584 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021627 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021654 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.021668 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125227 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125265 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125277 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.125302 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.130568 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-11-16 12:39:42.676371418 +0000 UTC Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227765 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227832 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227843 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227863 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.227875 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.330906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.331240 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.331691 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.331802 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.331902 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435567 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435626 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435639 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435656 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.435668 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539209 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539501 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539591 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539698 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.539788 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642220 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642270 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642281 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642298 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.642309 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.744837 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.745304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.745413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.745517 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.745626 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.848704 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.849002 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.849134 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.849224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.849298 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.932027 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:25 crc kubenswrapper[4725]: E0120 11:06:25.932251 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.932336 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:25 crc kubenswrapper[4725]: E0120 11:06:25.932547 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.932568 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:25 crc kubenswrapper[4725]: E0120 11:06:25.932994 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952616 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952696 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952714 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952740 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:25 crc kubenswrapper[4725]: I0120 11:06:25.952766 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:25Z","lastTransitionTime":"2026-01-20T11:06:25Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.055963 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.056016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.056028 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.056044 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.056054 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.131329 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2026-01-11 19:31:26.771030937 +0000 UTC Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.163920 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.163965 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.163979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.163998 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.164011 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267296 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267338 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267348 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267364 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.267376 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370803 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370842 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370867 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.370878 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.472979 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.473019 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.473029 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.473043 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.473053 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576254 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576289 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576299 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576313 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.576323 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679130 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679175 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679187 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679202 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.679216 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.781592 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.781935 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.782039 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.782152 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.782226 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884800 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884845 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884860 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.884871 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.931861 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:26 crc kubenswrapper[4725]: E0120 11:06:26.932307 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987379 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987390 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987407 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:26 crc kubenswrapper[4725]: I0120 11:06:26.987418 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:26Z","lastTransitionTime":"2026-01-20T11:06:26Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089833 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089846 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089888 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.089903 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.131976 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-02 22:47:18.757825445 +0000 UTC Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191906 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191947 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191957 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191977 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.191989 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294469 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294541 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294565 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294600 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.294636 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.397795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.398026 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.398046 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.398070 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.398119 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500791 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500889 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.500902 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604201 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604247 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604258 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604293 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.604310 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706852 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706865 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706882 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.706894 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.809797 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.810108 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.810224 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.810331 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.810421 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913123 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913182 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913199 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913223 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.913240 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.931662 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.931755 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:27 crc kubenswrapper[4725]: E0120 11:06:27.931825 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:27 crc kubenswrapper[4725]: I0120 11:06:27.931864 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:27 crc kubenswrapper[4725]: E0120 11:06:27.931890 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:27 crc kubenswrapper[4725]: E0120 11:06:27.932033 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
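
The condition={...} payload that kubelet prints in each "Node became not ready" entry above is plain JSON in the shape of a v1 NodeCondition. As a minimal sketch (a hypothetical helper, standard library only, not kubelet's own code), the payload copied verbatim from the 11:06:27.913240 entry decodes like this:

```go
// Minimal sketch: decode the condition={...} payload that kubelet logs in
// the "Node became not ready" entries. The struct mirrors only the fields
// visible in the log, not the full k8s.io/api/core/v1.NodeCondition type.
package main

import (
	"encoding/json"
	"fmt"
	"time"
)

type nodeCondition struct {
	Type               string    `json:"type"`
	Status             string    `json:"status"`
	LastHeartbeatTime  time.Time `json:"lastHeartbeatTime"`
	LastTransitionTime time.Time `json:"lastTransitionTime"`
	Reason             string    `json:"reason"`
	Message            string    `json:"message"`
}

func main() {
	// Payload copied verbatim from the 11:06:27.913240 entry above.
	raw := `{"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:27Z","lastTransitionTime":"2026-01-20T11:06:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"}`

	var c nodeCondition
	if err := json.Unmarshal([]byte(raw), &c); err != nil {
		panic(err)
	}
	fmt.Printf("%s=%s reason=%s\nmessage: %s\n", c.Type, c.Status, c.Reason, c.Message)
}
```

The same reason/message pair is what should surface for node crc in `kubectl describe node` while the CNI configuration is missing.
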
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015485 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015533 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015545 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015561 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.015573 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119105 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119167 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119186 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119206 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.119220 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.132484 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-25 19:08:33.273115752 +0000 UTC Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.222456 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.222766 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.222896 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.223413 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.223544 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326518 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326569 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326593 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326621 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.326643 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429758 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429795 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429805 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429819 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.429829 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.532854 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.532959 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.532986 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.533016 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.533038 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636122 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636228 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636304 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636333 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.636397 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740439 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740470 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740480 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740494 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.740502 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843372 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843415 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843425 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843443 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.843454 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.931653 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:28 crc kubenswrapper[4725]: E0120 11:06:28.931869 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945605 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945658 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945671 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945690 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.945703 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962636 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962688 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962709 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962767 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeNotReady" Jan 20 11:06:28 crc kubenswrapper[4725]: I0120 11:06:28.962780 4725 setters.go:603] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-20T11:06:28Z","lastTransitionTime":"2026-01-20T11:06:28Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.050397 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx"] Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.050919 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.056753 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.056764 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.058475 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.058603 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.091749 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=65.091727462 podStartE2EDuration="1m5.091727462s" podCreationTimestamp="2026-01-20 11:05:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.09166364 +0000 UTC m=+117.299985633" watchObservedRunningTime="2026-01-20 11:06:29.091727462 +0000 UTC m=+117.300049435" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.091956 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=96.091950899 podStartE2EDuration="1m36.091950899s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.076644888 +0000 UTC m=+117.284966861" watchObservedRunningTime="2026-01-20 11:06:29.091950899 +0000 UTC m=+117.300272862" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.133292 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2026-02-24 05:53:03 +0000 UTC, rotation deadline is 2025-12-23 03:35:52.987331142 +0000 UTC Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.133376 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Rotating certificates Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.139345 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podStartSLOduration=95.139329574 podStartE2EDuration="1m35.139329574s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.139033464 +0000 UTC m=+117.347355437" watchObservedRunningTime="2026-01-20 11:06:29.139329574 +0000 UTC m=+117.347651547" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.145106 4725 reflector.go:368] Caches populated for *v1.CertificateSigningRequest from k8s.io/client-go/tools/watch/informerwatcher.go:146 Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152521 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35389854-308b-4f28-9ac3-a41e20853c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: 
\"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152572 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35389854-308b-4f28-9ac3-a41e20853c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152647 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152696 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.152728 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35389854-308b-4f28-9ac3-a41e20853c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.174373 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-vchwb" podStartSLOduration=95.174348548 podStartE2EDuration="1m35.174348548s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.174116951 +0000 UTC m=+117.382438944" watchObservedRunningTime="2026-01-20 11:06:29.174348548 +0000 UTC m=+117.382670521" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.174584 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-z7f69" podStartSLOduration=95.174578666 podStartE2EDuration="1m35.174578666s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.157307155 +0000 UTC m=+117.365629138" watchObservedRunningTime="2026-01-20 11:06:29.174578666 +0000 UTC m=+117.382900639" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.185956 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=30.185937544 podStartE2EDuration="30.185937544s" podCreationTimestamp="2026-01-20 11:05:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.185347586 
+0000 UTC m=+117.393669559" watchObservedRunningTime="2026-01-20 11:06:29.185937544 +0000 UTC m=+117.394259517" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253721 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35389854-308b-4f28-9ac3-a41e20853c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253801 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253849 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253888 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35389854-308b-4f28-9ac3-a41e20853c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253912 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35389854-308b-4f28-9ac3-a41e20853c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.253930 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-cvo-updatepayloads\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.254024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/35389854-308b-4f28-9ac3-a41e20853c06-etc-ssl-certs\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.254866 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/35389854-308b-4f28-9ac3-a41e20853c06-service-ca\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 
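
Each pod_startup_latency_tracker.go:104 entry in this stretch carries the pod name together with podStartSLOduration and podStartE2EDuration values. A hypothetical triage helper (assumed names, standard library only, not part of kubelet) can pull those fields out of one such line; the quoted durations such as "1m36.263296029s" parse directly with Go's duration syntax:

```go
// Hypothetical triage helper: extract the pod name and podStartE2EDuration
// from an "Observed pod startup duration" log entry like the ones above.
package main

import (
	"fmt"
	"regexp"
	"time"
)

var startupRE = regexp.MustCompile(
	`"Observed pod startup duration" pod="([^"]+)".*?podStartE2EDuration="([^"]+)"`)

func main() {
	// Abbreviated sample taken from the 11:06:29.263316 entry above.
	line := `I0120 11:06:29.263316 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-c9dck" podStartSLOduration=96.263296029 podStartE2EDuration="1m36.263296029s"`

	m := startupRE.FindStringSubmatch(line)
	if m == nil {
		fmt.Println("no startup entry in line")
		return
	}
	d, err := time.ParseDuration(m[2]) // "1m36.263296029s" is valid Go duration syntax
	if err != nil {
		panic(err)
	}
	fmt.Printf("%s started in %s\n", m[1], d.Round(time.Second))
}
```
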
Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.263316 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-c9dck" podStartSLOduration=96.263296029 podStartE2EDuration="1m36.263296029s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.228012726 +0000 UTC m=+117.436334699" watchObservedRunningTime="2026-01-20 11:06:29.263296029 +0000 UTC m=+117.471618002" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.267574 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/35389854-308b-4f28-9ac3-a41e20853c06-serving-cert\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.272618 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/35389854-308b-4f28-9ac3-a41e20853c06-kube-api-access\") pod \"cluster-version-operator-5c965bbfc6-95pnx\" (UID: \"35389854-308b-4f28-9ac3-a41e20853c06\") " pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.279611 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-fv2jh" podStartSLOduration=96.279590511 podStartE2EDuration="1m36.279590511s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.263530537 +0000 UTC m=+117.471852510" watchObservedRunningTime="2026-01-20 11:06:29.279590511 +0000 UTC m=+117.487912484" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.294163 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-749d76644c-8ls4r" podStartSLOduration=93.294145947 podStartE2EDuration="1m33.294145947s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.293438556 +0000 UTC m=+117.501760539" watchObservedRunningTime="2026-01-20 11:06:29.294145947 +0000 UTC m=+117.502467920" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.348183 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=96.348165836 podStartE2EDuration="1m36.348165836s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.347191476 +0000 UTC m=+117.555513449" watchObservedRunningTime="2026-01-20 11:06:29.348165836 +0000 UTC m=+117.556487809" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.366715 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.382862 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=96.382840881 podStartE2EDuration="1m36.382840881s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:29.382505531 +0000 UTC m=+117.590827524" watchObservedRunningTime="2026-01-20 11:06:29.382840881 +0000 UTC m=+117.591162854" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.931912 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.931928 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:29 crc kubenswrapper[4725]: I0120 11:06:29.932456 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:29 crc kubenswrapper[4725]: E0120 11:06:29.932577 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:29 crc kubenswrapper[4725]: E0120 11:06:29.932657 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:29 crc kubenswrapper[4725]: E0120 11:06:29.932727 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:30 crc kubenswrapper[4725]: I0120 11:06:30.167207 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" event={"ID":"35389854-308b-4f28-9ac3-a41e20853c06","Type":"ContainerStarted","Data":"09ae986a64fe961c1b762568a3457e61a43a64c207922c968b75267161d978da"} Jan 20 11:06:30 crc kubenswrapper[4725]: I0120 11:06:30.167267 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" event={"ID":"35389854-308b-4f28-9ac3-a41e20853c06","Type":"ContainerStarted","Data":"702b932997e2135b2cad23835aca9243e0293ab5e7aa7c6aaa4d5a7bdfcb0d15"} Jan 20 11:06:30 crc kubenswrapper[4725]: I0120 11:06:30.186741 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-5c965bbfc6-95pnx" podStartSLOduration=96.186718124 podStartE2EDuration="1m36.186718124s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:30.186684524 +0000 UTC m=+118.395006497" watchObservedRunningTime="2026-01-20 11:06:30.186718124 +0000 UTC m=+118.395040097" Jan 20 11:06:30 crc kubenswrapper[4725]: I0120 11:06:30.931644 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:30 crc kubenswrapper[4725]: E0120 11:06:30.932168 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:31 crc kubenswrapper[4725]: I0120 11:06:31.931317 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:31 crc kubenswrapper[4725]: I0120 11:06:31.931459 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:31 crc kubenswrapper[4725]: E0120 11:06:31.931552 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:31 crc kubenswrapper[4725]: I0120 11:06:31.931352 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:31 crc kubenswrapper[4725]: E0120 11:06:31.931693 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:31 crc kubenswrapper[4725]: E0120 11:06:31.931875 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:32 crc kubenswrapper[4725]: I0120 11:06:32.931593 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:32 crc kubenswrapper[4725]: E0120 11:06:32.932958 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:32 crc kubenswrapper[4725]: E0120 11:06:32.952213 4725 kubelet_node_status.go:497] "Node not becoming ready in time after startup" Jan 20 11:06:33 crc kubenswrapper[4725]: E0120 11:06:33.010850 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:34 crc kubenswrapper[4725]: I0120 11:06:33.931786 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:34 crc kubenswrapper[4725]: I0120 11:06:33.931918 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:34 crc kubenswrapper[4725]: E0120 11:06:33.932024 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:34 crc kubenswrapper[4725]: I0120 11:06:33.931810 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:34 crc kubenswrapper[4725]: E0120 11:06:33.932108 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:34 crc kubenswrapper[4725]: E0120 11:06:33.932278 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:34 crc kubenswrapper[4725]: I0120 11:06:34.931668 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:34 crc kubenswrapper[4725]: E0120 11:06:34.931862 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856144 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/1.log" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856761 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/0.log" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856806 4725 generic.go:334] "Generic (PLEG): container finished" podID="627f7c97-4173-413f-a90e-e2c5e058c53b" containerID="31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6" exitCode=1 Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856838 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerDied","Data":"31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6"} Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.856872 4725 scope.go:117] "RemoveContainer" containerID="60700e135b34653b1d5bd672ec00a83c4f77886f25fbe6323ffbf268b44fb3ad" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.857355 4725 scope.go:117] "RemoveContainer" containerID="31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6" Jan 20 11:06:35 crc kubenswrapper[4725]: E0120 11:06:35.857547 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-multus pod=multus-vchwb_openshift-multus(627f7c97-4173-413f-a90e-e2c5e058c53b)\"" pod="openshift-multus/multus-vchwb" podUID="627f7c97-4173-413f-a90e-e2c5e058c53b" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.932030 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.932068 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:35 crc kubenswrapper[4725]: I0120 11:06:35.932180 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:35 crc kubenswrapper[4725]: E0120 11:06:35.932284 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:35 crc kubenswrapper[4725]: E0120 11:06:35.932392 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:35 crc kubenswrapper[4725]: E0120 11:06:35.932465 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:36 crc kubenswrapper[4725]: I0120 11:06:36.865025 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/1.log" Jan 20 11:06:36 crc kubenswrapper[4725]: I0120 11:06:36.931715 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:36 crc kubenswrapper[4725]: E0120 11:06:36.931867 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:36 crc kubenswrapper[4725]: I0120 11:06:36.933038 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:36 crc kubenswrapper[4725]: E0120 11:06:36.933359 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"ovnkube-controller\" with CrashLoopBackOff: \"back-off 40s restarting failed container=ovnkube-controller pod=ovnkube-node-nz9p5_openshift-ovn-kubernetes(9143f3c2-a068-494d-b7e1-4200c04394a3)\"" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" Jan 20 11:06:37 crc kubenswrapper[4725]: I0120 11:06:37.931372 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:37 crc kubenswrapper[4725]: I0120 11:06:37.931460 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:37 crc kubenswrapper[4725]: E0120 11:06:37.931508 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:37 crc kubenswrapper[4725]: E0120 11:06:37.931604 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:37 crc kubenswrapper[4725]: I0120 11:06:37.931396 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:37 crc kubenswrapper[4725]: E0120 11:06:37.931681 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:38 crc kubenswrapper[4725]: E0120 11:06:38.011916 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:38 crc kubenswrapper[4725]: I0120 11:06:38.931740 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:38 crc kubenswrapper[4725]: E0120 11:06:38.931898 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:39 crc kubenswrapper[4725]: I0120 11:06:39.931640 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:39 crc kubenswrapper[4725]: I0120 11:06:39.931696 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:39 crc kubenswrapper[4725]: E0120 11:06:39.931791 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:39 crc kubenswrapper[4725]: I0120 11:06:39.931696 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:39 crc kubenswrapper[4725]: E0120 11:06:39.931876 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:39 crc kubenswrapper[4725]: E0120 11:06:39.931961 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:40 crc kubenswrapper[4725]: I0120 11:06:40.931316 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:40 crc kubenswrapper[4725]: E0120 11:06:40.931460 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:41 crc kubenswrapper[4725]: I0120 11:06:41.932130 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:41 crc kubenswrapper[4725]: I0120 11:06:41.932157 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:41 crc kubenswrapper[4725]: I0120 11:06:41.932121 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:41 crc kubenswrapper[4725]: E0120 11:06:41.932360 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:41 crc kubenswrapper[4725]: E0120 11:06:41.932514 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:41 crc kubenswrapper[4725]: E0120 11:06:41.932619 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:42 crc kubenswrapper[4725]: I0120 11:06:42.932605 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:42 crc kubenswrapper[4725]: E0120 11:06:42.935144 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:43 crc kubenswrapper[4725]: E0120 11:06:43.013208 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:43 crc kubenswrapper[4725]: I0120 11:06:43.931470 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:43 crc kubenswrapper[4725]: I0120 11:06:43.931495 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:43 crc kubenswrapper[4725]: E0120 11:06:43.931663 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:43 crc kubenswrapper[4725]: E0120 11:06:43.931773 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:43 crc kubenswrapper[4725]: I0120 11:06:43.931516 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:43 crc kubenswrapper[4725]: E0120 11:06:43.931874 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:44 crc kubenswrapper[4725]: I0120 11:06:44.931354 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:44 crc kubenswrapper[4725]: E0120 11:06:44.931496 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:45 crc kubenswrapper[4725]: I0120 11:06:45.931906 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:45 crc kubenswrapper[4725]: I0120 11:06:45.931963 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:45 crc kubenswrapper[4725]: I0120 11:06:45.931898 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:45 crc kubenswrapper[4725]: E0120 11:06:45.932205 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:45 crc kubenswrapper[4725]: E0120 11:06:45.932379 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:45 crc kubenswrapper[4725]: E0120 11:06:45.932597 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:46 crc kubenswrapper[4725]: I0120 11:06:46.931680 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:46 crc kubenswrapper[4725]: E0120 11:06:46.931970 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:47 crc kubenswrapper[4725]: I0120 11:06:47.931764 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:47 crc kubenswrapper[4725]: E0120 11:06:47.931985 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:47 crc kubenswrapper[4725]: I0120 11:06:47.931798 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:47 crc kubenswrapper[4725]: E0120 11:06:47.932160 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:47 crc kubenswrapper[4725]: I0120 11:06:47.931772 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:47 crc kubenswrapper[4725]: E0120 11:06:47.932255 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:48 crc kubenswrapper[4725]: E0120 11:06:48.014412 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:48 crc kubenswrapper[4725]: I0120 11:06:48.931627 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:48 crc kubenswrapper[4725]: E0120 11:06:48.932124 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:49 crc kubenswrapper[4725]: I0120 11:06:49.931832 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:49 crc kubenswrapper[4725]: I0120 11:06:49.931928 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:49 crc kubenswrapper[4725]: E0120 11:06:49.932183 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:49 crc kubenswrapper[4725]: I0120 11:06:49.932219 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:49 crc kubenswrapper[4725]: E0120 11:06:49.932310 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:49 crc kubenswrapper[4725]: E0120 11:06:49.932435 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:49 crc kubenswrapper[4725]: I0120 11:06:49.932738 4725 scope.go:117] "RemoveContainer" containerID="31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6" Jan 20 11:06:50 crc kubenswrapper[4725]: I0120 11:06:50.920454 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/1.log" Jan 20 11:06:50 crc kubenswrapper[4725]: I0120 11:06:50.920575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5"} Jan 20 11:06:50 crc kubenswrapper[4725]: I0120 11:06:50.933718 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:50 crc kubenswrapper[4725]: E0120 11:06:50.934012 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:50 crc kubenswrapper[4725]: I0120 11:06:50.936174 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.926232 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/3.log" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.929461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerStarted","Data":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.930036 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.931221 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.931228 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:51 crc kubenswrapper[4725]: E0120 11:06:51.931332 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:51 crc kubenswrapper[4725]: I0120 11:06:51.931233 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:51 crc kubenswrapper[4725]: E0120 11:06:51.931427 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:51 crc kubenswrapper[4725]: E0120 11:06:51.931613 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:52 crc kubenswrapper[4725]: I0120 11:06:52.007999 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podStartSLOduration=118.007979551 podStartE2EDuration="1m58.007979551s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:06:52.007887508 +0000 UTC m=+140.216209581" watchObservedRunningTime="2026-01-20 11:06:52.007979551 +0000 UTC m=+140.216301544" Jan 20 11:06:52 crc kubenswrapper[4725]: I0120 11:06:52.009162 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5lfc4"] Jan 20 11:06:52 crc kubenswrapper[4725]: I0120 11:06:52.009283 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:52 crc kubenswrapper[4725]: E0120 11:06:52.009394 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.015767 4725 kubelet.go:2916] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 20 11:06:53 crc kubenswrapper[4725]: I0120 11:06:53.931828 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:53 crc kubenswrapper[4725]: I0120 11:06:53.931890 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:53 crc kubenswrapper[4725]: I0120 11:06:53.931895 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:53 crc kubenswrapper[4725]: I0120 11:06:53.931895 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.932788 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.932833 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.932852 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:53 crc kubenswrapper[4725]: E0120 11:06:53.932861 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:55 crc kubenswrapper[4725]: I0120 11:06:55.931615 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:55 crc kubenswrapper[4725]: I0120 11:06:55.931685 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:55 crc kubenswrapper[4725]: I0120 11:06:55.931717 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:55 crc kubenswrapper[4725]: I0120 11:06:55.931779 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:55 crc kubenswrapper[4725]: E0120 11:06:55.932870 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:55 crc kubenswrapper[4725]: E0120 11:06:55.932968 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:55 crc kubenswrapper[4725]: E0120 11:06:55.933057 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:55 crc kubenswrapper[4725]: E0120 11:06:55.933151 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:56 crc kubenswrapper[4725]: I0120 11:06:56.727515 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:06:56 crc kubenswrapper[4725]: I0120 11:06:56.727664 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:06:57 crc kubenswrapper[4725]: I0120 11:06:57.931406 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:57 crc kubenswrapper[4725]: I0120 11:06:57.931496 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:57 crc kubenswrapper[4725]: I0120 11:06:57.931535 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:57 crc kubenswrapper[4725]: I0120 11:06:57.931424 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:57 crc kubenswrapper[4725]: E0120 11:06:57.931638 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-xd92c" podUID="3b6479f0-333b-4a96-9adf-2099afdc2447" Jan 20 11:06:57 crc kubenswrapper[4725]: E0120 11:06:57.931669 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" podUID="9d751cbb-f2e2-430d-9754-c882a5e924a5" Jan 20 11:06:57 crc kubenswrapper[4725]: E0120 11:06:57.931795 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" podUID="5fe485a1-e14f-4c09-b5b9-f252bc42b7e8" Jan 20 11:06:57 crc kubenswrapper[4725]: E0120 11:06:57.931894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-5lfc4" podUID="a5d55efc-e85a-4a02-a4ce-7355df9fea66" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.473367 4725 kubelet_node_status.go:724] "Recording event message for node" node="crc" event="NodeReady" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.527411 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-twkw7"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.528149 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.530311 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.530957 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.531255 4725 reflector.go:561] object-"openshift-apiserver"/"etcd-client": failed to list *v1.Secret: secrets "etcd-client" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.531317 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-client\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"etcd-client\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.532680 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.533297 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.534265 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.534667 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.535628 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.536207 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.536995 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.537658 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.538608 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.539116 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541487 4725 reflector.go:561] object-"openshift-apiserver"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541533 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541543 4725 reflector.go:561] object-"openshift-apiserver"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541602 4725 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-images": failed to list *v1.ConfigMap: configmaps "machine-api-operator-images" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541618 4725 reflector.go:561] object-"openshift-apiserver"/"etcd-serving-ca": failed to list *v1.ConfigMap: configmaps "etcd-serving-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541627 4725 reflector.go:561] object-"openshift-apiserver"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541615 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is 
forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541647 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"etcd-serving-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"etcd-serving-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541649 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-images\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"machine-api-operator-images\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541667 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541631 4725 reflector.go:561] object-"openshift-apiserver"/"audit-1": failed to list *v1.ConfigMap: configmaps "audit-1" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541686 4725 reflector.go:561] object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff": failed to list *v1.Secret: secrets "openshift-apiserver-sa-dockercfg-djjff" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541707 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"audit-1\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"audit-1\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541718 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-djjff\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-apiserver-sa-dockercfg-djjff\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541543 4725 reflector.go:561] object-"openshift-apiserver"/"serving-cert": failed to list *v1.Secret: secrets 
"serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541757 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.541783 4725 reflector.go:561] object-"openshift-apiserver"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.541813 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.542007 4725 reflector.go:561] object-"openshift-apiserver"/"image-import-ca": failed to list *v1.ConfigMap: configmaps "image-import-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.542034 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"image-import-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"image-import-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.543217 4725 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7": failed to list *v1.Secret: secrets "machine-api-operator-dockercfg-mfbb7" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.543254 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-mfbb7\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-dockercfg-mfbb7\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.543885 4725 reflector.go:561] object-"openshift-apiserver"/"encryption-config-1": failed to list *v1.Secret: secrets "encryption-config-1" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the 
namespace "openshift-apiserver": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.543914 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-apiserver\"/\"encryption-config-1\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"encryption-config-1\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-apiserver\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.543992 4725 reflector.go:561] object-"openshift-machine-api"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.544014 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.544990 4725 reflector.go:561] object-"openshift-machine-api"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.545033 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.547250 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/console-f9d7485db-75nfb"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.547899 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.549109 4725 reflector.go:561] object-"openshift-machine-api"/"machine-api-operator-tls": failed to list *v1.Secret: secrets "machine-api-operator-tls" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.549159 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"machine-api-operator-tls\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"machine-api-operator-tls\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.551755 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.552646 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.553735 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.554357 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.554994 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.555790 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.556732 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.557175 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.560135 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.560706 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.560803 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.561396 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-7954f5f757-2hmdd"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.561751 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.562030 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.562529 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g28q4"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.563114 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.564812 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vc6c2"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.565597 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.565711 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.566363 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.570733 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5fj5p"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.571728 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.572704 4725 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj": failed to list *v1.Secret: secrets "authentication-operator-dockercfg-mz9bj" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.572758 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-mz9bj\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"authentication-operator-dockercfg-mz9bj\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.572773 4725 reflector.go:561] object-"openshift-authentication-operator"/"service-ca-bundle": failed to list *v1.ConfigMap: configmaps "service-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.572808 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"service-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"service-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list 
resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.572708 4725 reflector.go:561] object-"openshift-route-controller-manager"/"config": failed to list *v1.ConfigMap: configmaps "config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.572831 4725 reflector.go:561] object-"openshift-authentication-operator"/"trusted-ca-bundle": failed to list *v1.ConfigMap: configmaps "trusted-ca-bundle" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.572888 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"trusted-ca-bundle\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.572841 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573132 4725 reflector.go:561] object-"openshift-authentication-operator"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573140 4725 reflector.go:561] object-"openshift-route-controller-manager"/"serving-cert": failed to list *v1.Secret: secrets "serving-cert" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-route-controller-manager": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573163 4725 reflector.go:561] object-"openshift-machine-api"/"kube-rbac-proxy": failed to list *v1.ConfigMap: configmaps "kube-rbac-proxy" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-machine-api": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573164 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between 
node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573164 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-route-controller-manager\"/\"serving-cert\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"serving-cert\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-route-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573185 4725 reflector.go:561] object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c": failed to list *v1.Secret: secrets "openshift-controller-manager-sa-dockercfg-msq4c" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573216 4725 reflector.go:561] object-"openshift-authentication-operator"/"authentication-operator-config": failed to list *v1.ConfigMap: configmaps "authentication-operator-config" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573234 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-msq4c\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"openshift-controller-manager-sa-dockercfg-msq4c\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573192 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-machine-api\"/\"kube-rbac-proxy\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-rbac-proxy\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-machine-api\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573253 4725 reflector.go:561] object-"openshift-authentication-operator"/"openshift-service-ca.crt": failed to list *v1.ConfigMap: configmaps "openshift-service-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573282 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"openshift-service-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573251 4725 reflector.go:158] "Unhandled Error" 
err="object-\"openshift-authentication-operator\"/\"authentication-operator-config\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"authentication-operator-config\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573357 4725 reflector.go:561] object-"openshift-authentication-operator"/"kube-root-ca.crt": failed to list *v1.ConfigMap: configmaps "kube-root-ca.crt" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-authentication-operator": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573380 4725 reflector.go:561] object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template": failed to list *v1.Secret: secrets "v4-0-config-system-ocp-branding-template" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573386 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-authentication-operator\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573405 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-system-ocp-branding-template\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573517 4725 reflector.go:561] object-"openshift-controller-manager"/"client-ca": failed to list *v1.ConfigMap: configmaps "client-ca" is forbidden: User "system:node:crc" cannot list resource "configmaps" in API group "" in the namespace "openshift-controller-manager": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573538 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-controller-manager\"/\"client-ca\": Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps \"client-ca\" is forbidden: User \"system:node:crc\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"openshift-controller-manager\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573615 4725 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-login": failed to list *v1.Secret: secrets "v4-0-config-user-template-login" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 20 
11:06:59 crc kubenswrapper[4725]: W0120 11:06:59.573630 4725 reflector.go:561] object-"openshift-authentication"/"v4-0-config-user-template-provider-selection": failed to list *v1.Secret: secrets "v4-0-config-user-template-provider-selection" is forbidden: User "system:node:crc" cannot list resource "secrets" in API group "" in the namespace "openshift-authentication": no relationship found between node 'crc' and this object Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573634 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-login\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: E0120 11:06:59.573646 4725 reflector.go:158] "Unhandled Error" err="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"v4-0-config-user-template-provider-selection\" is forbidden: User \"system:node:crc\" cannot list resource \"secrets\" in API group \"\" in the namespace \"openshift-authentication\": no relationship found between node 'crc' and this object" logger="UnhandledError" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.573734 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.573758 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.574706 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.575014 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.575255 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.575718 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-5444994796-nxchh"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.578211 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.580671 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.580674 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.580836 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581028 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy" Jan 20 11:06:59 crc 
kubenswrapper[4725]: I0120 11:06:59.581266 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581410 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581617 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581825 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581882 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.581836 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.582116 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.582538 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.593895 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.596832 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.596871 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.597344 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.597792 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.598347 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599105 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599134 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599217 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599300 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599349 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599464 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599510 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599463 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599594 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599628 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599643 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599760 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599796 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599878 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599873 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599937 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599976 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599996 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599882 4725 reflector.go:368] Caches populated for 
*v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599944 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600068 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600094 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600134 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.599904 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600210 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600225 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600244 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600327 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600405 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600415 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600501 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600505 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600574 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600588 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600651 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600681 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600731 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: 
I0120 11:06:59.600743 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600765 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600830 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.600838 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601009 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601032 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601140 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601689 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601808 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.601827 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.602612 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.603015 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.603282 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.603576 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.603858 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.604122 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.604443 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.604462 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.605910 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.609563 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.610363 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.610979 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.611644 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.613922 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.616296 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.616566 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.616714 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.617338 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.617964 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.618916 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.622039 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.623127 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.623229 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.628951 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.629649 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.630460 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.634309 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.634976 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.635415 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.638151 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.640037 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.641553 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.644334 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.644764 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.645945 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.664043 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665472 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-encryption-config\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665524 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-65hm8\" (UniqueName: \"kubernetes.io/projected/2216efbd-f6b4-4579-a94a-18c5177df641-kube-api-access-65hm8\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665570 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665648 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-client\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665692 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2216efbd-f6b4-4579-a94a-18c5177df641-audit-dir\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665756 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-audit-policies\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-serving-cert\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.665801 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: 
\"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.666012 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psvt7"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.666947 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.668484 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.669556 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.669671 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.671420 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.674323 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.675036 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vt8w"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.676148 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.676260 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.676772 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.686709 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-twkw7"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.688271 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.691573 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.693773 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.703234 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.705397 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hmdd"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.705453 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.706717 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.711952 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.715236 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.715289 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.722364 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.723404 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.724417 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.725398 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g28q4"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.726451 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-x85nm"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.727179 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-x85nm" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.727641 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.729025 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.729752 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.731736 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.732758 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.733960 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.735150 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-75nfb"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.736435 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.737491 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.738335 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.738518 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.739776 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.741276 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5fj5p"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.742262 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.742991 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.743914 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vc6c2"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.748686 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.749934 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"] Jan 20 11:06:59 crc kubenswrapper[4725]: 
I0120 11:06:59.752583 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vt8w"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.753741 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.753799 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x85nm"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.808933 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.809632 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.810284 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.812251 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814794 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-audit-policies\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814838 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-serving-cert\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814857 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-encryption-config\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814968 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-65hm8\" (UniqueName: \"kubernetes.io/projected/2216efbd-f6b4-4579-a94a-18c5177df641-kube-api-access-65hm8\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.814993 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8px9g\" 
(UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.815147 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2216efbd-f6b4-4579-a94a-18c5177df641-audit-dir\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.815170 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-client\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.815888 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-audit-policies\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.815971 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psvt7"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.816607 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-serving-ca\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.817211 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2216efbd-f6b4-4579-a94a-18c5177df641-trusted-ca-bundle\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.817239 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/2216efbd-f6b4-4579-a94a-18c5177df641-audit-dir\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.819339 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-kkxct"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.820704 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.821332 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-encryption-config\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.821993 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-serving-cert\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.822136 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-4s7gv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.824156 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/2216efbd-f6b4-4579-a94a-18c5177df641-etcd-client\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.824435 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.825294 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4s7gv"] Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.832817 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.851004 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.870575 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.891182 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.910548 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.931219 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.931263 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.931316 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.931227 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.932144 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.951158 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.970661 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 11:06:59 crc kubenswrapper[4725]: I0120 11:06:59.990172 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.036818 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.050565 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.070024 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.090778 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.111049 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.132154 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.150840 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.171336 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.190843 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.211489 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.230658 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.251451 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.270402 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.291855 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.310503 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr" Jan 20 
11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.331399 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.350565 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.370997 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.390689 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.411572 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.431508 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.451517 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.470345 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.491127 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.510903 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.531301 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.550788 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.571257 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.590861 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.611262 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.628873 4725 request.go:700] Waited for 1.016918424s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.631179 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 
20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.652290 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.672027 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.701492 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.710449 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.731206 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.749922 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.770864 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.790581 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.831817 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.850701 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.870719 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.890606 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.911415 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.930763 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.950810 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.971268 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 11:07:00 crc kubenswrapper[4725]: I0120 11:07:00.990592 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.010205 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.031241 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.051870 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.071244 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.090451 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.110661 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.131583 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.151133 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.172429 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.192374 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.260924 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.262855 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.263095 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.269977 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.290981 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.311435 4725 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.330855 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.351147 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.370938 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.391051 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.410185 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.446443 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-65hm8\" (UniqueName: \"kubernetes.io/projected/2216efbd-f6b4-4579-a94a-18c5177df641-kube-api-access-65hm8\") pod \"apiserver-7bbb656c7d-8px9g\" (UID: \"2216efbd-f6b4-4579-a94a-18c5177df641\") " pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.451886 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.471262 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.490730 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.510865 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.530284 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.537318 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.552107 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.571502 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.590972 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.611316 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.628892 4725 request.go:700] Waited for 1.697253475s due to client-side throttling, not priority and fairness, request: GET:https://api-int.crc.testing:6443/api/v1/namespaces/openshift-network-diagnostics/configmaps?fieldSelector=metadata.name%3Dopenshift-service-ca.crt&limit=500&resourceVersion=0 Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.631332 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.653014 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.671342 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.691375 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.730266 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.751006 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.770976 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.790113 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.799787 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"] Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.810668 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.854766 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.855051 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.871680 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.890375 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.911341 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.930548 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.956691 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.967594 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:01 crc kubenswrapper[4725]: E0120 11:07:01.967862 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:09:03.967825847 +0000 UTC m=+272.176147860 (durationBeforeRetry 2m2s). 
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.967995 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" event={"ID":"2216efbd-f6b4-4579-a94a-18c5177df641","Type":"ContainerStarted","Data":"1f112b92d92e3d2506761e631a32c75251786020111776fd88a51ae894fe2f06"}
Jan 20 11:07:01 crc kubenswrapper[4725]: I0120 11:07:01.970359 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.008598 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.010020 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.030975 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.070753 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.091245 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.110614 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.131690 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.151069 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.170783 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.191580 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.211782 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.230794 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"kube-root-ca.crt"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.256874 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.283962 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle"
Jan 20
11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350188 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350274 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350313 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350323 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350586 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350660 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350708 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.350790 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: 
\"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.351214 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.351371 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.351532 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.352227 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.852206207 +0000 UTC m=+151.060528200 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.352560 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.353116 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.354658 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.357335 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqllr\" (UniqueName: \"kubernetes.io/projected/3b6479f0-333b-4a96-9adf-2099afdc2447-kube-api-access-cqllr\") pod \"network-check-target-xd92c\" (UID: \"3b6479f0-333b-4a96-9adf-2099afdc2447\") " pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.357521 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-s2dwl\" (UniqueName: \"kubernetes.io/projected/9d751cbb-f2e2-430d-9754-c882a5e924a5-kube-api-access-s2dwl\") pod \"network-check-source-55646444c4-trplf\" (UID: \"9d751cbb-f2e2-430d-9754-c882a5e924a5\") " 
pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.358093 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-nginx-conf\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.452876 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453174 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453216 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6wdq5\" (UniqueName: \"kubernetes.io/projected/7f131da2-d815-48eb-b2ab-7f6df6a4039a-kube-api-access-6wdq5\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.453305 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.953208322 +0000 UTC m=+151.161530295 (durationBeforeRetry 500ms). 
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453401 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-auth-proxy-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453462 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453484 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.453531 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-metrics-tls\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.455722 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w5m4f\" (UniqueName: \"kubernetes.io/projected/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-kube-api-access-w5m4f\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456027 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpntm\" (UniqueName: \"kubernetes.io/projected/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-kube-api-access-tpntm\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456238 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-service-ca\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
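
The long run of operationExecutor.VerifyControllerAttachedVolume and MountVolume entries here and below is kubelet's volume manager reconciler working through its normal loop: it diffs the desired state of the world (the volumes the scheduled pods require) against the actual state (what is already verified, attached, and mounted) and starts an operation for each difference. Reduced to its shape, with every name below a hypothetical illustration rather than kubelet's actual types:

    // reconciler_sketch.go - the desired-vs-actual control loop behind the
    // "operationExecutor.*" lines, stripped to its essence. All identifiers
    // here are illustrative, not kubelet's real API.
    package main

    import "fmt"

    type volumeState map[string]bool // volume name -> present in this state?

    func reconcile(desired, actual volumeState) {
        // Mount whatever is desired but not yet in the actual world...
        for v := range desired {
            if !actual[v] {
                fmt.Println("MountVolume started for volume", v)
                actual[v] = true // stand-in for the real mount operation
            }
        }
        // ...and unmount whatever is still actual but no longer desired.
        for v := range actual {
            if !desired[v] {
                fmt.Println("UnmountVolume started for volume", v)
                delete(actual, v)
            }
        }
    }

    func main() {
        desired := volumeState{"registry-certificates": true, "trusted-ca": true}
        actual := volumeState{"old-pvc": true}
        reconcile(desired, actual) // one pass of the loop kubelet runs continuously
    }
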
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456323 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2efafa7a-ca64-4166-a72b-9b70b86953ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456363 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6dqmj\" (UniqueName: \"kubernetes.io/projected/2efafa7a-ca64-4166-a72b-9b70b86953ad-kube-api-access-6dqmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456387 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2efafa7a-ca64-4166-a72b-9b70b86953ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456424 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-serving-cert\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456493 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456595 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-serving-cert\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456615 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456654 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: 
\"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456671 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rpkjz\" (UniqueName: \"kubernetes.io/projected/e3e30f02-3956-427a-a1f3-6e1d51f242d6-kube-api-access-rpkjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456688 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-metrics-certs\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456710 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456726 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456745 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456885 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df8c05f-b523-439b-908b-c4f34b22b7e9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.456908 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457007 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwl4m\" (UniqueName: \"kubernetes.io/projected/808fb947-228d-42c4-ba11-480348f80d8a-kube-api-access-lwl4m\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457023 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2wmnh\" (UniqueName: \"kubernetes.io/projected/ac3b56d0-256f-40f8-b2ff-2271f82ff750-kube-api-access-2wmnh\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457108 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457127 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-config\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457530 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457599 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457653 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-config\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457698 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-node-pullsecrets\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457744 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbd48\" (UniqueName: \"kubernetes.io/projected/cf2d94b1-aa78-4a9d-8e32-232f92ec8988-kube-api-access-qbd48\") pod \"migrator-59844c95c7-rlw62\" (UID: \"cf2d94b1-aa78-4a9d-8e32-232f92ec8988\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.457939 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2gzxg\" (UniqueName: \"kubernetes.io/projected/cb0c9cf6-4966-4bd0-8933-823bc00e103c-kube-api-access-2gzxg\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458033 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458061 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-trusted-ca-bundle\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458108 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458136 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458157 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f27b4eea-081e-421a-83e9-8a5266163c53-serving-cert\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: 
\"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458181 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458204 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458244 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458440 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdplr\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-kube-api-access-qdplr\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458567 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-config\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458666 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458700 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-config\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458744 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") pod \"oauth-openshift-558db77b4-lhx4z\" 
(UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458778 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb63abc7-f429-46c5-aa23-259063c394d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458798 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458819 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-default-certificate\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458842 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-client\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458862 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-75fg8\" (UniqueName: \"kubernetes.io/projected/d19058e6-30ec-474e-bada-73b4981a9b65-kube-api-access-75fg8\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458881 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-stats-auth\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458918 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-oauth-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458939 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-trusted-ca\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 
11:07:02.458961 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.458984 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58svc\" (UniqueName: \"kubernetes.io/projected/a8d4d608-4f73-4365-a535-71e712884eb9-kube-api-access-58svc\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459008 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459027 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-oauth-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459053 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e30f02-3956-427a-a1f3-6e1d51f242d6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459136 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459200 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8d4d608-4f73-4365-a535-71e712884eb9-proxy-tls\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 
11:07:02.459225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-serving-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459277 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-config\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459334 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df8c05f-b523-439b-908b-c4f34b22b7e9-proxy-tls\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459461 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459613 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.459680 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460543 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvpn6\" (UniqueName: \"kubernetes.io/projected/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-kube-api-access-rvpn6\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460594 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460621 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460646 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19058e6-30ec-474e-bada-73b4981a9b65-service-ca-bundle\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460666 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d0ff97b-8da9-4156-a78b-9ebd6886313f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460700 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460719 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460740 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57xsn\" (UniqueName: \"kubernetes.io/projected/4df8c05f-b523-439b-908b-c4f34b22b7e9-kube-api-access-57xsn\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460762 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-images\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d0ff97b-8da9-4156-a78b-9ebd6886313f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460832 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460852 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-serving-cert\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460872 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit-dir\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460907 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-service-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460935 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/808fb947-228d-42c4-ba11-480348f80d8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460957 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.460982 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461005 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461032 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb63abc7-f429-46c5-aa23-259063c394d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461056 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-images\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461099 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461125 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-machine-approver-tls\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461146 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461175 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rh5wc\" (UniqueName: \"kubernetes.io/projected/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-kube-api-access-rh5wc\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461219 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-frvfh\" (UniqueName: \"kubernetes.io/projected/f27b4eea-081e-421a-83e9-8a5266163c53-kube-api-access-frvfh\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461251 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3e30f02-3956-427a-a1f3-6e1d51f242d6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461270 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-config\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461313 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c4kqz\" (UniqueName: \"kubernetes.io/projected/6c5d8a1b-5c54-4877-8739-a83ab530197d-kube-api-access-c4kqz\") pod \"downloads-7954f5f757-2hmdd\" (UID: \"6c5d8a1b-5c54-4877-8739-a83ab530197d\") " pod="openshift-console/downloads-7954f5f757-2hmdd"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461333 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-srv-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461357 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461383 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461403 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461423 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396ed454-f2c7-483a-8aad-0953041099b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461446 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461511 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dkqvr\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-kube-api-access-dkqvr\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461547 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-service-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461571 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9t9ww\" (UniqueName: \"kubernetes.io/projected/396ed454-f2c7-483a-8aad-0953041099b5-kube-api-access-9t9ww\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461607 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-image-import-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461626 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gkllc\" (UniqueName: \"kubernetes.io/projected/b8859d17-62ea-47b3-ac63-537e69ec9f90-kube-api-access-gkllc\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461651 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.461691 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-serving-cert\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462135 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462140 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462161 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-client\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462255 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-encryption-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462330 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.462396 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:02.962377771 +0000 UTC m=+151.170699794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462393 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.462473 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cghgt\" (UniqueName: \"kubernetes.io/projected/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-kube-api-access-cghgt\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.463249 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.463248 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.463328 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396ed454-f2c7-483a-8aad-0953041099b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.463417 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.468410 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.469999 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.483617 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.486356 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-xd92c"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.500126 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.529005 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.584569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.584862 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.084843412 +0000 UTC m=+151.293165385 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586355 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586406 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-cabundle\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586433 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-57xsn\" (UniqueName: \"kubernetes.io/projected/4df8c05f-b523-439b-908b-c4f34b22b7e9-kube-api-access-57xsn\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586458 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-images\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586489 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d0ff97b-8da9-4156-a78b-9ebd6886313f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586511 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-serving-cert\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586558 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit-dir\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586584 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222f710d-f6a2-48e7-9175-55b50f3aba30-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586609 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-service-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/808fb947-228d-42c4-ba11-480348f80d8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586682 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586706 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb63abc7-f429-46c5-aa23-259063c394d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586731 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eca1f8da-59f2-404e-a5e0-dbe1a191b885-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-images\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586776 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e1eba244-7c59-4933-ad4c-5dccc8fdc854-tmpfs\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586800 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586823 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-machine-approver-tls\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586845 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586890 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rh5wc\" (UniqueName: \"kubernetes.io/projected/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-kube-api-access-rh5wc\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586912 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2kwbw\" (UniqueName: \"kubernetes.io/projected/29ff5711-1e81-4ed0-8acd-6124100de37d-kube-api-access-2kwbw\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586933 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-frvfh\" (UniqueName: \"kubernetes.io/projected/f27b4eea-081e-421a-83e9-8a5266163c53-kube-api-access-frvfh\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586954 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3e30f02-3956-427a-a1f3-6e1d51f242d6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.586977 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-config\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587003 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c4kqz\" (UniqueName: \"kubernetes.io/projected/6c5d8a1b-5c54-4877-8739-a83ab530197d-kube-api-access-c4kqz\") pod \"downloads-7954f5f757-2hmdd\" (UID: \"6c5d8a1b-5c54-4877-8739-a83ab530197d\") " pod="openshift-console/downloads-7954f5f757-2hmdd"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587026 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-srv-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587056 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587102 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587128 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db710f25-e573-414c-9129-0dfa945d0b71-metrics-tls\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587153 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587178 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587236 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396ed454-f2c7-483a-8aad-0953041099b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587287 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmvj\" (UniqueName: \"kubernetes.io/projected/1bb3a268-d628-4c34-b9ca-38d43d82bf86-kube-api-access-7hmvj\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587309 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587331 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dkqvr\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-kube-api-access-dkqvr\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587374 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-service-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587394 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-apiservice-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587421 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9t9ww\" (UniqueName: \"kubernetes.io/projected/396ed454-f2c7-483a-8aad-0953041099b5-kube-api-access-9t9ww\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587444 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rlqg\" (UniqueName: \"kubernetes.io/projected/8428545d-e40d-4259-b579-ce7bff401888-kube-api-access-7rlqg\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587466 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-mountpoint-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ff5711-1e81-4ed0-8acd-6124100de37d-config\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587515 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-image-import-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587538 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-gkllc\" (UniqueName: \"kubernetes.io/projected/b8859d17-62ea-47b3-ac63-537e69ec9f90-kube-api-access-gkllc\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587561 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-serving-cert\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587583 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nc9nk\" (UniqueName: \"kubernetes.io/projected/876f0761-c4c3-42f7-81f8-9a26071a7676-kube-api-access-nc9nk\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587604 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f51665c-048e-4625-846b-872a367664e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587627 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587648 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-client\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587667 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-encryption-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587691 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587713 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cghgt\" (UniqueName: \"kubernetes.io/projected/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-kube-api-access-cghgt\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587734 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587754 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396ed454-f2c7-483a-8aad-0953041099b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587801 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587822 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587845 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6wdq5\" (UniqueName: \"kubernetes.io/projected/7f131da2-d815-48eb-b2ab-7f6df6a4039a-kube-api-access-6wdq5\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-auth-proxy-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587895 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587915 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587937 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-metrics-tls\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587958 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w5m4f\" (UniqueName: \"kubernetes.io/projected/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-kube-api-access-w5m4f\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.587979 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-srv-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpntm\" (UniqueName: \"kubernetes.io/projected/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-kube-api-access-tpntm\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588093 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-plugins-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588133 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-service-ca\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588157 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2efafa7a-ca64-4166-a72b-9b70b86953ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6dqmj\" (UniqueName: \"kubernetes.io/projected/2efafa7a-ca64-4166-a72b-9b70b86953ad-kube-api-access-6dqmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588220 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zcnj7\" (UniqueName: \"kubernetes.io/projected/eca1f8da-59f2-404e-a5e0-dbe1a191b885-kube-api-access-zcnj7\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2efafa7a-ca64-4166-a72b-9b70b86953ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-serving-cert\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588346 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b07c5d50-bb91-412d-b86a-3d736a16a81d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588374 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-registration-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588400 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-serving-cert\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588422 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588456 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588479 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rpkjz\" (UniqueName: \"kubernetes.io/projected/e3e30f02-3956-427a-a1f3-6e1d51f242d6-kube-api-access-rpkjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588511 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db710f25-e573-414c-9129-0dfa945d0b71-config-volume\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588582 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-metrics-certs\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588610 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-csi-data-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588631 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-certs\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588656 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588679 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588722 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ff5711-1e81-4ed0-8acd-6124100de37d-serving-cert\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588743 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588767 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df8c05f-b523-439b-908b-c4f34b22b7e9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588789 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkzb8\" (UniqueName: \"kubernetes.io/projected/e1eba244-7c59-4933-ad4c-5dccc8fdc854-kube-api-access-fkzb8\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588813 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588841 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lwl4m\" (UniqueName: \"kubernetes.io/projected/808fb947-228d-42c4-ba11-480348f80d8a-kube-api-access-lwl4m\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588864 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2wmnh\" (UniqueName: \"kubernetes.io/projected/ac3b56d0-256f-40f8-b2ff-2271f82ff750-kube-api-access-2wmnh\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588888 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588913 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-config\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588937 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName:
\"kubernetes.io/empty-dir/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588962 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.588988 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-config\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589012 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bbqpd\" (UniqueName: \"kubernetes.io/projected/6023e844-87d6-4f4d-bf86-a685b937cda5-kube-api-access-bbqpd\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589037 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-node-pullsecrets\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589059 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6023e844-87d6-4f4d-bf86-a685b937cda5-cert\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589097 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vsjkl\" (UniqueName: \"kubernetes.io/projected/db710f25-e573-414c-9129-0dfa945d0b71-kube-api-access-vsjkl\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589120 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpfs4\" (UniqueName: \"kubernetes.io/projected/38cb64e1-bd23-43eb-9eae-7c05f040640b-kube-api-access-dpfs4\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589145 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbd48\" (UniqueName: \"kubernetes.io/projected/cf2d94b1-aa78-4a9d-8e32-232f92ec8988-kube-api-access-qbd48\") pod \"migrator-59844c95c7-rlw62\" (UID: \"cf2d94b1-aa78-4a9d-8e32-232f92ec8988\") " 
pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589173 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2gzxg\" (UniqueName: \"kubernetes.io/projected/cb0c9cf6-4966-4bd0-8933-823bc00e103c-kube-api-access-2gzxg\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589218 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-trusted-ca-bundle\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589254 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-node-bootstrap-token\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589281 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nfhjr\" (UniqueName: \"kubernetes.io/projected/3f51665c-048e-4625-846b-872a367664e5-kube-api-access-nfhjr\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589302 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-key\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589329 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589353 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f27b4eea-081e-421a-83e9-8a5266163c53-serving-cert\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589382 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589410 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tdkh9\" (UniqueName: \"kubernetes.io/projected/b07c5d50-bb91-412d-b86a-3d736a16a81d-kube-api-access-tdkh9\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589438 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589464 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qdplr\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-kube-api-access-qdplr\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589501 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-config\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589523 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-config\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589607 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-socket-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589631 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb63abc7-f429-46c5-aa23-259063c394d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589679 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-default-certificate\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589704 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-client\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589728 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-75fg8\" (UniqueName: \"kubernetes.io/projected/d19058e6-30ec-474e-bada-73b4981a9b65-kube-api-access-75fg8\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-stats-auth\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589776 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-oauth-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589799 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-trusted-ca\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589824 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589846 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-58svc\" (UniqueName: \"kubernetes.io/projected/a8d4d608-4f73-4365-a535-71e712884eb9-kube-api-access-58svc\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589872 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-oauth-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589921 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e30f02-3956-427a-a1f3-6e1d51f242d6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589947 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.589973 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590002 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8d4d608-4f73-4365-a535-71e712884eb9-proxy-tls\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590025 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-serving-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-config\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590072 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df8c05f-b523-439b-908b-c4f34b22b7e9-proxy-tls\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590131 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590156 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590180 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590209 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvpn6\" (UniqueName: \"kubernetes.io/projected/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-kube-api-access-rvpn6\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590235 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222f710d-f6a2-48e7-9175-55b50f3aba30-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590259 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222f710d-f6a2-48e7-9175-55b50f3aba30-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: 
\"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590289 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590317 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590342 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19058e6-30ec-474e-bada-73b4981a9b65-service-ca-bundle\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590366 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d0ff97b-8da9-4156-a78b-9ebd6886313f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590402 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-webhook-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590425 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.590457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.591308 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc 
kubenswrapper[4725]: I0120 11:07:02.591547 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.592299 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-auth-proxy-config\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.592678 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/4df8c05f-b523-439b-908b-c4f34b22b7e9-mcc-auth-proxy-config\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.593361 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.593828 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-images\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.594913 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.595145 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.595194 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-trusted-ca\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.595747 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-auth-proxy-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.596150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.596469 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.596712 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-image-import-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597069 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/a8d4d608-4f73-4365-a535-71e712884eb9-proxy-tls\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597193 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-config\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597327 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/fb63abc7-f429-46c5-aa23-259063c394d0-trusted-ca\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597776 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-available-featuregates\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.597960 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-serving-ca\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 
11:07:02.598381 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-serving-cert\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.598695 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/808fb947-228d-42c4-ba11-480348f80d8a-config\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.598922 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.613410 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-service-ca\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.613971 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2efafa7a-ca64-4166-a72b-9b70b86953ad-config\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.614389 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2efafa7a-ca64-4166-a72b-9b70b86953ad-serving-cert\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.619382 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.619630 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.645706 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-trusted-ca-bundle\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.647009 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-config\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.647295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-node-pullsecrets\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.648538 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-trusted-ca-bundle\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.649492 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.650205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/396ed454-f2c7-483a-8aad-0953041099b5-config\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.651526 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-57xsn\" (UniqueName: \"kubernetes.io/projected/4df8c05f-b523-439b-908b-c4f34b22b7e9-kube-api-access-57xsn\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.651607 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.651678 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-serving-cert\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" 
Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.651768 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652025 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-serving-cert\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652272 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-metrics-certs\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652431 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-serving-cert\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652446 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652499 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-default-certificate\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.652749 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9d0ff97b-8da9-4156-a78b-9ebd6886313f-metrics-tls\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653028 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653112 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-serving-cert\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " 
pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653254 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-config\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653365 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-serving-cert\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653545 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-service-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653736 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/4df8c05f-b523-439b-908b-c4f34b22b7e9-proxy-tls\") pod \"machine-config-controller-84d6567774-lrwnv\" (UID: \"4df8c05f-b523-439b-908b-c4f34b22b7e9\") " pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.653882 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/5fe485a1-e14f-4c09-b5b9-f252bc42b7e8-networking-console-plugin-cert\") pod \"networking-console-plugin-85b44fc459-gdk6g\" (UID: \"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8\") " pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.654042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/a8d4d608-4f73-4365-a535-71e712884eb9-images\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.654307 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-config\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.655455 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-srv-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.655523 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-machine-approver-tls\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.655544 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.657424 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.657975 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-client\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.658309 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-oauth-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.658335 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-samples-operator-tls\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.658352 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/cb0c9cf6-4966-4bd0-8933-823bc00e103c-audit-dir\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.658662 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.15864934 +0000 UTC m=+151.366971313 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.658806 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e3e30f02-3956-427a-a1f3-6e1d51f242d6-serving-cert\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.659024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e3e30f02-3956-427a-a1f3-6e1d51f242d6-config\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.659571 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19058e6-30ec-474e-bada-73b4981a9b65-service-ca-bundle\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.659643 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.659677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f27b4eea-081e-421a-83e9-8a5266163c53-serving-cert\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.660226 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9d0ff97b-8da9-4156-a78b-9ebd6886313f-trusted-ca\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.660846 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-config\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661450 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661566 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-etcd-ca\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661801 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/ac3b56d0-256f-40f8-b2ff-2271f82ff750-config\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661774 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-trusted-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661929 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.661997 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.662598 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.662817 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-metrics-tls\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.663762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f27b4eea-081e-421a-83e9-8a5266163c53-service-ca-bundle\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:02 crc 
kubenswrapper[4725]: I0120 11:07:02.663880 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/808fb947-228d-42c4-ba11-480348f80d8a-machine-api-operator-tls\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.664785 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.664892 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665359 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rpkjz\" (UniqueName: \"kubernetes.io/projected/e3e30f02-3956-427a-a1f3-6e1d51f242d6-kube-api-access-rpkjz\") pod \"openshift-apiserver-operator-796bbdcf4f-xtwx5\" (UID: \"e3e30f02-3956-427a-a1f3-6e1d51f242d6\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665771 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-serving-cert\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665777 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/396ed454-f2c7-483a-8aad-0953041099b5-serving-cert\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665839 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-etcd-client\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665957 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7f131da2-d815-48eb-b2ab-7f6df6a4039a-profile-collector-cert\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.665987 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: 
\"kubernetes.io/secret/d19058e6-30ec-474e-bada-73b4981a9b65-stats-auth\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.666426 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/fb63abc7-f429-46c5-aa23-259063c394d0-image-registry-operator-tls\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.666544 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/cb0c9cf6-4966-4bd0-8933-823bc00e103c-encryption-config\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.667524 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lwl4m\" (UniqueName: \"kubernetes.io/projected/808fb947-228d-42c4-ba11-480348f80d8a-kube-api-access-lwl4m\") pod \"machine-api-operator-5694c8668f-hhz9f\" (UID: \"808fb947-228d-42c4-ba11-480348f80d8a\") " pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.667569 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.667629 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.667870 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6wdq5\" (UniqueName: \"kubernetes.io/projected/7f131da2-d815-48eb-b2ab-7f6df6a4039a-kube-api-access-6wdq5\") pod \"catalog-operator-68c6474976-hqvrw\" (UID: \"7f131da2-d815-48eb-b2ab-7f6df6a4039a\") " pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.668048 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/b8859d17-62ea-47b3-ac63-537e69ec9f90-console-oauth-config\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.668916 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " 
pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.684935 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2wmnh\" (UniqueName: \"kubernetes.io/projected/ac3b56d0-256f-40f8-b2ff-2271f82ff750-kube-api-access-2wmnh\") pod \"etcd-operator-b45778765-5fj5p\" (UID: \"ac3b56d0-256f-40f8-b2ff-2271f82ff750\") " pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.690929 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.691104 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.191064512 +0000 UTC m=+151.399386485 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691205 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tdkh9\" (UniqueName: \"kubernetes.io/projected/b07c5d50-bb91-412d-b86a-3d736a16a81d-kube-api-access-tdkh9\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691271 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-socket-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691335 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222f710d-f6a2-48e7-9175-55b50f3aba30-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691363 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222f710d-f6a2-48e7-9175-55b50f3aba30-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691395 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691429 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-webhook-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691453 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691477 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-cabundle\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222f710d-f6a2-48e7-9175-55b50f3aba30-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691541 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eca1f8da-59f2-404e-a5e0-dbe1a191b885-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691569 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e1eba244-7c59-4933-ad4c-5dccc8fdc854-tmpfs\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691625 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2kwbw\" (UniqueName: \"kubernetes.io/projected/29ff5711-1e81-4ed0-8acd-6124100de37d-kube-api-access-2kwbw\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691670 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: 
\"kubernetes.io/secret/db710f25-e573-414c-9129-0dfa945d0b71-metrics-tls\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691698 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmvj\" (UniqueName: \"kubernetes.io/projected/1bb3a268-d628-4c34-b9ca-38d43d82bf86-kube-api-access-7hmvj\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691728 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-apiservice-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691760 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rlqg\" (UniqueName: \"kubernetes.io/projected/8428545d-e40d-4259-b579-ce7bff401888-kube-api-access-7rlqg\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691782 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-mountpoint-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691806 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ff5711-1e81-4ed0-8acd-6124100de37d-config\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691873 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nc9nk\" (UniqueName: \"kubernetes.io/projected/876f0761-c4c3-42f7-81f8-9a26071a7676-kube-api-access-nc9nk\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691901 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f51665c-048e-4625-846b-872a367664e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691960 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-srv-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc 
kubenswrapper[4725]: I0120 11:07:02.691991 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-plugins-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692018 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zcnj7\" (UniqueName: \"kubernetes.io/projected/eca1f8da-59f2-404e-a5e0-dbe1a191b885-kube-api-access-zcnj7\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b07c5d50-bb91-412d-b86a-3d736a16a81d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-registration-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692137 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db710f25-e573-414c-9129-0dfa945d0b71-config-volume\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692164 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-csi-data-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692190 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-certs\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692218 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ff5711-1e81-4ed0-8acd-6124100de37d-serving-cert\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692243 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkzb8\" (UniqueName: \"kubernetes.io/projected/e1eba244-7c59-4933-ad4c-5dccc8fdc854-kube-api-access-fkzb8\") pod 
\"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692277 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bbqpd\" (UniqueName: \"kubernetes.io/projected/6023e844-87d6-4f4d-bf86-a685b937cda5-kube-api-access-bbqpd\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692296 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6023e844-87d6-4f4d-bf86-a685b937cda5-cert\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692316 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vsjkl\" (UniqueName: \"kubernetes.io/projected/db710f25-e573-414c-9129-0dfa945d0b71-kube-api-access-vsjkl\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692355 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dpfs4\" (UniqueName: \"kubernetes.io/projected/38cb64e1-bd23-43eb-9eae-7c05f040640b-kube-api-access-dpfs4\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692397 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-node-bootstrap-token\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692421 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nfhjr\" (UniqueName: \"kubernetes.io/projected/3f51665c-048e-4625-846b-872a367664e5-kube-api-access-nfhjr\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692444 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-key\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692616 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.692966 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-cabundle\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.693662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/222f710d-f6a2-48e7-9175-55b50f3aba30-config\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.694386 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-registration-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.694641 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/e1eba244-7c59-4933-ad4c-5dccc8fdc854-tmpfs\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.695348 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/222f710d-f6a2-48e7-9175-55b50f3aba30-serving-cert\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.695426 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-mountpoint-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.695573 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/db710f25-e573-414c-9129-0dfa945d0b71-config-volume\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.695724 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-csi-data-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.696638 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 11:07:03.196621377 +0000 UTC m=+151.404943420 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.691542 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-socket-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.698236 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/1bb3a268-d628-4c34-b9ca-38d43d82bf86-plugins-dir\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.702795 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-certs\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.704095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/b07c5d50-bb91-412d-b86a-3d736a16a81d-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.704095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-webhook-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.704703 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-profile-collector-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.705618 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29ff5711-1e81-4ed0-8acd-6124100de37d-config\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.717744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"metrics-tls\" (UniqueName: \"kubernetes.io/secret/db710f25-e573-414c-9129-0dfa945d0b71-metrics-tls\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.720602 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/6023e844-87d6-4f4d-bf86-a685b937cda5-cert\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.720771 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/3f51665c-048e-4625-846b-872a367664e5-package-server-manager-serving-cert\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.725923 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-gkllc\" (UniqueName: \"kubernetes.io/projected/b8859d17-62ea-47b3-ac63-537e69ec9f90-kube-api-access-gkllc\") pod \"console-f9d7485db-75nfb\" (UID: \"b8859d17-62ea-47b3-ac63-537e69ec9f90\") " pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.769627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6dqmj\" (UniqueName: \"kubernetes.io/projected/2efafa7a-ca64-4166-a72b-9b70b86953ad-kube-api-access-6dqmj\") pod \"kube-storage-version-migrator-operator-b67b599dd-mljkv\" (UID: \"2efafa7a-ca64-4166-a72b-9b70b86953ad\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.778773 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.787334 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpntm\" (UniqueName: \"kubernetes.io/projected/fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a-kube-api-access-tpntm\") pod \"cluster-samples-operator-665b6dd947-6s2qz\" (UID: \"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.793289 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.793815 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.293801561 +0000 UTC m=+151.502123534 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.796369 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/e1eba244-7c59-4933-ad4c-5dccc8fdc854-apiservice-cert\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.796592 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/38cb64e1-bd23-43eb-9eae-7c05f040640b-node-bootstrap-token\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.797927 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/29ff5711-1e81-4ed0-8acd-6124100de37d-serving-cert\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.798413 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/876f0761-c4c3-42f7-81f8-9a26071a7676-signing-key\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.798453 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") pod \"collect-profiles-29481780-smks9\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.802277 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/8428545d-e40d-4259-b579-ce7bff401888-srv-cert\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.802694 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/eca1f8da-59f2-404e-a5e0-dbe1a191b885-webhook-certs\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.802793 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbd48\" (UniqueName: 
\"kubernetes.io/projected/cf2d94b1-aa78-4a9d-8e32-232f92ec8988-kube-api-access-qbd48\") pod \"migrator-59844c95c7-rlw62\" (UID: \"cf2d94b1-aa78-4a9d-8e32-232f92ec8988\") " pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.824590 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2gzxg\" (UniqueName: \"kubernetes.io/projected/cb0c9cf6-4966-4bd0-8933-823bc00e103c-kube-api-access-2gzxg\") pod \"apiserver-76f77b778f-twkw7\" (UID: \"cb0c9cf6-4966-4bd0-8933-823bc00e103c\") " pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.835781 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.843918 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-bound-sa-token\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.857771 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.866920 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.894820 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.895464 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.395448496 +0000 UTC m=+151.603770469 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.896990 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.911749 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.913239 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cghgt\" (UniqueName: \"kubernetes.io/projected/1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1-kube-api-access-cghgt\") pod \"machine-approver-56656f9798-5d4sw\" (UID: \"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1\") " pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.914376 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-58svc\" (UniqueName: \"kubernetes.io/projected/a8d4d608-4f73-4365-a535-71e712884eb9-kube-api-access-58svc\") pod \"machine-config-operator-74547568cd-9lr6k\" (UID: \"a8d4d608-4f73-4365-a535-71e712884eb9\") " pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.924362 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qdplr\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-kube-api-access-qdplr\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.924647 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.927884 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-75fg8\" (UniqueName: \"kubernetes.io/projected/d19058e6-30ec-474e-bada-73b4981a9b65-kube-api-access-75fg8\") pod \"router-default-5444994796-nxchh\" (UID: \"d19058e6-30ec-474e-bada-73b4981a9b65\") " pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.930263 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.941697 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.951782 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.957758 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.969943 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w5m4f\" (UniqueName: \"kubernetes.io/projected/08bc2ba3-3f1f-40df-bf3d-1d5ed634945b-kube-api-access-w5m4f\") pod \"console-operator-58897d9998-vc6c2\" (UID: \"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b\") " pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.974234 4725 generic.go:334] "Generic (PLEG): container finished" podID="2216efbd-f6b4-4579-a94a-18c5177df641" containerID="a2f8507fc61c358dce5dbe25990d21561714c310b806b6e3d18b1c5aa921714c" exitCode=0 Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.974271 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" event={"ID":"2216efbd-f6b4-4579-a94a-18c5177df641","Type":"ContainerDied","Data":"a2f8507fc61c358dce5dbe25990d21561714c310b806b6e3d18b1c5aa921714c"} Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.978059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c3dff36b-2e27-4c6b-bee4-19cd58833ea7-kube-api-access\") pod \"kube-controller-manager-operator-78b949d7b-gs8nk\" (UID: \"c3dff36b-2e27-4c6b-bee4-19cd58833ea7\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:02 crc kubenswrapper[4725]: I0120 11:07:02.996415 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:02 crc kubenswrapper[4725]: E0120 11:07:02.997045 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.497023189 +0000 UTC m=+151.705345152 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.015178 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.020323 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.025612 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.098535 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.105042 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.605022924 +0000 UTC m=+151.813344907 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.124446 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rh5wc\" (UniqueName: \"kubernetes.io/projected/6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb-kube-api-access-rh5wc\") pod \"dns-operator-744455d44c-g28q4\" (UID: \"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb\") " pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.131040 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-frvfh\" (UniqueName: \"kubernetes.io/projected/f27b4eea-081e-421a-83e9-8a5266163c53-kube-api-access-frvfh\") pod \"authentication-operator-69f744f599-5fgr9\" (UID: \"f27b4eea-081e-421a-83e9-8a5266163c53\") " pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.136499 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") pod \"route-controller-manager-6576b87f9c-lwhzw\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.138614 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") pod \"controller-manager-879f6c89f-r5qmp\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.141225 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c4kqz\" (UniqueName: \"kubernetes.io/projected/6c5d8a1b-5c54-4877-8739-a83ab530197d-kube-api-access-c4kqz\") pod \"downloads-7954f5f757-2hmdd\" (UID: \"6c5d8a1b-5c54-4877-8739-a83ab530197d\") " pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 
11:07:03.142456 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.150448 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.157680 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9t9ww\" (UniqueName: \"kubernetes.io/projected/396ed454-f2c7-483a-8aad-0953041099b5-kube-api-access-9t9ww\") pod \"openshift-controller-manager-operator-756b6f6bc6-kshvw\" (UID: \"396ed454-f2c7-483a-8aad-0953041099b5\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.167501 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/fb63abc7-f429-46c5-aa23-259063c394d0-bound-sa-token\") pod \"cluster-image-registry-operator-dc59b4c8b-mscpw\" (UID: \"fb63abc7-f429-46c5-aa23-259063c394d0\") " pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.221111 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.221509 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.221541 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.224034 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.724002336 +0000 UTC m=+151.932324309 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.224205 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.225141 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.225597 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.725581165 +0000 UTC m=+151.933903138 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.240612 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tdkh9\" (UniqueName: \"kubernetes.io/projected/b07c5d50-bb91-412d-b86a-3d736a16a81d-kube-api-access-tdkh9\") pod \"control-plane-machine-set-operator-78cbb6b69f-sh5db\" (UID: \"b07c5d50-bb91-412d-b86a-3d736a16a81d\") " pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.242869 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/293d5f2d-38b8-49ad-b7cc-eaf6ea931e59-kube-api-access\") pod \"kube-apiserver-operator-766d6c64bb-pnkqn\" (UID: \"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.244019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") pod \"oauth-openshift-558db77b4-lhx4z\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.247828 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.248634 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dkqvr\" (UniqueName: \"kubernetes.io/projected/9d0ff97b-8da9-4156-a78b-9ebd6886313f-kube-api-access-dkqvr\") pod \"ingress-operator-5b745b69d9-2dsbj\" (UID: \"9d0ff97b-8da9-4156-a78b-9ebd6886313f\") " pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.249417 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.249874 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") pod \"marketplace-operator-79b997595-tgvmj\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.250109 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.277258 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.282541 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvpn6\" (UniqueName: \"kubernetes.io/projected/1f8986ee-ae07-4ffe-89f2-c73eca4d3465-kube-api-access-rvpn6\") pod \"openshift-config-operator-7777fb866f-cmnx5\" (UID: \"1f8986ee-ae07-4ffe-89f2-c73eca4d3465\") " pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.295253 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/222f710d-f6a2-48e7-9175-55b50f3aba30-kube-api-access\") pod \"openshift-kube-scheduler-operator-5fdd9b5758-wrjzq\" (UID: \"222f710d-f6a2-48e7-9175-55b50f3aba30\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.316195 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.329854 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.330405 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.83038102 +0000 UTC m=+152.038702983 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.333629 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bbqpd\" (UniqueName: \"kubernetes.io/projected/6023e844-87d6-4f4d-bf86-a685b937cda5-kube-api-access-bbqpd\") pod \"ingress-canary-4s7gv\" (UID: \"6023e844-87d6-4f4d-bf86-a685b937cda5\") " pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.340952 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nfhjr\" (UniqueName: \"kubernetes.io/projected/3f51665c-048e-4625-846b-872a367664e5-kube-api-access-nfhjr\") pod \"package-server-manager-789f6589d5-6hcj8\" (UID: \"3f51665c-048e-4625-846b-872a367664e5\") " pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.341554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2kwbw\" (UniqueName: \"kubernetes.io/projected/29ff5711-1e81-4ed0-8acd-6124100de37d-kube-api-access-2kwbw\") pod \"service-ca-operator-777779d784-6cxkl\" (UID: \"29ff5711-1e81-4ed0-8acd-6124100de37d\") " pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.351427 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.371738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vsjkl\" (UniqueName: \"kubernetes.io/projected/db710f25-e573-414c-9129-0dfa945d0b71-kube-api-access-vsjkl\") pod \"dns-default-x85nm\" (UID: \"db710f25-e573-414c-9129-0dfa945d0b71\") " pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.371987 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-4s7gv" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.372301 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmvj\" (UniqueName: \"kubernetes.io/projected/1bb3a268-d628-4c34-b9ca-38d43d82bf86-kube-api-access-7hmvj\") pod \"csi-hostpathplugin-9vt8w\" (UID: \"1bb3a268-d628-4c34-b9ca-38d43d82bf86\") " pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.406178 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rlqg\" (UniqueName: \"kubernetes.io/projected/8428545d-e40d-4259-b579-ce7bff401888-kube-api-access-7rlqg\") pod \"olm-operator-6b444d44fb-tvh28\" (UID: \"8428545d-e40d-4259-b579-ce7bff401888\") " pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.407395 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.417345 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.427782 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dpfs4\" (UniqueName: \"kubernetes.io/projected/38cb64e1-bd23-43eb-9eae-7c05f040640b-kube-api-access-dpfs4\") pod \"machine-config-server-kkxct\" (UID: \"38cb64e1-bd23-43eb-9eae-7c05f040640b\") " pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.431735 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.432048 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:03.932035404 +0000 UTC m=+152.140357377 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.441097 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkzb8\" (UniqueName: \"kubernetes.io/projected/e1eba244-7c59-4933-ad4c-5dccc8fdc854-kube-api-access-fkzb8\") pod \"packageserver-d55dfcdfc-d7t4z\" (UID: \"e1eba244-7c59-4933-ad4c-5dccc8fdc854\") " pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.459728 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.469799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-nc9nk\" (UniqueName: \"kubernetes.io/projected/876f0761-c4c3-42f7-81f8-9a26071a7676-kube-api-access-nc9nk\") pod \"service-ca-9c57cc56f-psvt7\" (UID: \"876f0761-c4c3-42f7-81f8-9a26071a7676\") " pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.504730 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.507946 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zcnj7\" (UniqueName: \"kubernetes.io/projected/eca1f8da-59f2-404e-a5e0-dbe1a191b885-kube-api-access-zcnj7\") pod \"multus-admission-controller-857f4d67dd-7j2sn\" (UID: \"eca1f8da-59f2-404e-a5e0-dbe1a191b885\") " pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.532683 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.533029 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.033014278 +0000 UTC m=+152.241336251 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.563751 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.570201 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.580772 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.586907 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.595324 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.618639 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.619366 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.634295 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.634695 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.134678524 +0000 UTC m=+152.343000497 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.680599 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-kkxct" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.681527 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.682073 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.784269 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.784734 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.284713814 +0000 UTC m=+152.493035787 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.885563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.885823 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.385813472 +0000 UTC m=+152.594135445 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:03 crc kubenswrapper[4725]: I0120 11:07:03.986789 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:03 crc kubenswrapper[4725]: E0120 11:07:03.987302 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.487280362 +0000 UTC m=+152.695602345 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.125911 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.126622 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.626609945 +0000 UTC m=+152.834931918 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.232246 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.232937 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.732922597 +0000 UTC m=+152.941244570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.266234 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"9fb93fadf612c16a3fafc9a8b21d7b94afecd42163dbbdb1a7d80ae2d8e0f73c"} Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.267414 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nxchh" event={"ID":"d19058e6-30ec-474e-bada-73b4981a9b65","Type":"ContainerStarted","Data":"0c38d4674bf7c1beaea3cfdb53f3b8819c62e7ae48d2467ce6b5c8f62cb48fc3"} Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.269135 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" event={"ID":"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1","Type":"ContainerStarted","Data":"70ef496b54ee860579e36d9d44431303cfe66f2365d9ab45098f33470f21f177"} Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.294736 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"02080eac58544da25b823b4ef631a4458d792115e11928eb9f6dcce5008672f0"} Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.360742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.361053 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.861037567 +0000 UTC m=+153.069359540 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: W0120 11:07:04.431514 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod38cb64e1_bd23_43eb_9eae_7c05f040640b.slice/crio-320aef01031df0ea64dce820d35f075a1ba8633929cc34387075e5df6a12fb1d WatchSource:0}: Error finding container 320aef01031df0ea64dce820d35f075a1ba8633929cc34387075e5df6a12fb1d: Status 404 returned error can't find the container with id 320aef01031df0ea64dce820d35f075a1ba8633929cc34387075e5df6a12fb1d Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.461554 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.461849 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:04.961834024 +0000 UTC m=+153.170155997 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.574652 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.575334 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.075321823 +0000 UTC m=+153.283643796 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.675972 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.676410 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.17639407 +0000 UTC m=+153.384716033 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.801827 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.802239 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.302219087 +0000 UTC m=+153.510541090 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.808707 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5"] Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.902995 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.903331 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.403291384 +0000 UTC m=+153.611613357 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:04 crc kubenswrapper[4725]: I0120 11:07:04.903450 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:04 crc kubenswrapper[4725]: E0120 11:07:04.904177 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.404166861 +0000 UTC m=+153.612488834 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.004380 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.004723 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.504700402 +0000 UTC m=+153.713022375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.110257 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.110633 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.610616761 +0000 UTC m=+153.818938734 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.286562 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.286779 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.786748414 +0000 UTC m=+153.995070387 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.287448 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.289355 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.788685825 +0000 UTC m=+153.997007798 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.300185 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" event={"ID":"2216efbd-f6b4-4579-a94a-18c5177df641","Type":"ContainerStarted","Data":"ea3e31ced5d335052e2b41c8aeaafdb835975b5e6cd58067d45fc0c387cc3f26"} Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.302700 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-xd92c" event={"ID":"3b6479f0-333b-4a96-9adf-2099afdc2447","Type":"ContainerStarted","Data":"191a80d1f7d8bfa4554dcb5899e3b714f5cbd9f67af9d4d632c67d8927e8f2ea"} Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.303489 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.314044 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-5444994796-nxchh" event={"ID":"d19058e6-30ec-474e-bada-73b4981a9b65","Type":"ContainerStarted","Data":"43c681a5995d3854b44911ef1c1d6ce4a7c57dbe4132c1c823f912e6e2e80735"} Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.317071 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkxct" event={"ID":"38cb64e1-bd23-43eb-9eae-7c05f040640b","Type":"ContainerStarted","Data":"320aef01031df0ea64dce820d35f075a1ba8633929cc34387075e5df6a12fb1d"} Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.392784 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g" podStartSLOduration=129.392767097 podStartE2EDuration="2m9.392767097s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:05.392528759 +0000 UTC m=+153.600850732" watchObservedRunningTime="2026-01-20 11:07:05.392767097 +0000 UTC m=+153.601089070" Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.393143 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.393195 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.89318271 +0000 UTC m=+154.101504683 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.395585 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.397560 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:05.897550068 +0000 UTC m=+154.105872031 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.526211 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.526910 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.026894136 +0000 UTC m=+154.235216099 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.628783 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.629778 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.129733618 +0000 UTC m=+154.338055591 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.730240 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.730458 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.230430153 +0000 UTC m=+154.438752126 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.730554 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.730942 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.230924539 +0000 UTC m=+154.439246512 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.840772 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.840959 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.340934168 +0000 UTC m=+154.549256141 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.841114 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.841447 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.341435364 +0000 UTC m=+154.549757337 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:05 crc kubenswrapper[4725]: I0120 11:07:05.986189 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:05 crc kubenswrapper[4725]: E0120 11:07:05.986976 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.486958882 +0000 UTC m=+154.695280855 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.087768 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.088207 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.588185544 +0000 UTC m=+154.796507517 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.190617 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.190785 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.690747658 +0000 UTC m=+154.899069641 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.191197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.191551 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.691540863 +0000 UTC m=+154.899862836 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.222365 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.262693 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:06 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:06 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:06 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.262751 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.295390 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.295871 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.795851681 +0000 UTC m=+155.004173654 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.322180 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" event={"ID":"e3e30f02-3956-427a-a1f3-6e1d51f242d6","Type":"ContainerStarted","Data":"7475b0a180909d8e7e2578a99d6ac8c3f674e276d1589c0305e9d5b357a14cd4"}
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.322239 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" event={"ID":"e3e30f02-3956-427a-a1f3-6e1d51f242d6","Type":"ContainerStarted","Data":"b91537ee475d833a1b40b9c66e75b163eebb41b5cddd9fca919949159ee9b071"}
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.331105 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" event={"ID":"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1","Type":"ContainerStarted","Data":"c21910ab12f8c87cbb3174064be9a0a13864273587a192c1502a956337a668b2"}
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.331143 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" event={"ID":"1fcac0f7-a1c0-4c1b-bc5c-e5d3dafe3ff1","Type":"ContainerStarted","Data":"c544212299330dddbc7a70f8c9e56dbce0bb5f2b4da38586f71d3872e1b9b26a"}
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.336554 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-55646444c4-trplf" event={"ID":"9d751cbb-f2e2-430d-9754-c882a5e924a5","Type":"ContainerStarted","Data":"b1a90f8b40736ec87f3ca1352b03efea881688a553f26527d8ed8c7258d2cac0"}
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.339146 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-kkxct" event={"ID":"38cb64e1-bd23-43eb-9eae-7c05f040640b","Type":"ContainerStarted","Data":"9f3a4019b84a995abbc5b8d13b8adbbe9d6934baf1034e588efc3380695c2846"}
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.342399 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-5444994796-nxchh" podStartSLOduration=132.342380009 podStartE2EDuration="2m12.342380009s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:05.440650647 +0000 UTC m=+153.648972630" watchObservedRunningTime="2026-01-20 11:07:06.342380009 +0000 UTC m=+154.550701982"
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.344713 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-796bbdcf4f-xtwx5" podStartSLOduration=133.344701962 podStartE2EDuration="2m13.344701962s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:06.343098522 +0000 UTC m=+154.551420495" watchObservedRunningTime="2026-01-20 11:07:06.344701962 +0000 UTC m=+154.553023925"
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.382200 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-kkxct" podStartSLOduration=7.382176744 podStartE2EDuration="7.382176744s" podCreationTimestamp="2026-01-20 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:06.363456113 +0000 UTC m=+154.571778086" watchObservedRunningTime="2026-01-20 11:07:06.382176744 +0000 UTC m=+154.590498717"
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.397945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.398284 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:06.898269741 +0000 UTC m=+155.106591714 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.499661 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.501395 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.001376672 +0000 UTC m=+155.209698655 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.585333 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.585627 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.601253 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.601602 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.101590751 +0000 UTC m=+155.309912724 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.616919 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-56656f9798-5d4sw" podStartSLOduration=133.616896194 podStartE2EDuration="2m13.616896194s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:06.403601009 +0000 UTC m=+154.611922992" watchObservedRunningTime="2026-01-20 11:07:06.616896194 +0000 UTC m=+154.825218167"
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.619795 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-b45778765-5fj5p"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.657752 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-76f77b778f-twkw7"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.663760 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv"]
Jan 20 11:07:06 crc kubenswrapper[4725]: W0120 11:07:06.697355 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2efafa7a_ca64_4166_a72b_9b70b86953ad.slice/crio-5602ff652bde473e5be3a3e166d41f41197b0f89ecb745b21765912b2e42e73e WatchSource:0}: Error finding container 5602ff652bde473e5be3a3e166d41f41197b0f89ecb745b21765912b2e42e73e: Status 404 returned error can't find the container with id 5602ff652bde473e5be3a3e166d41f41197b0f89ecb745b21765912b2e42e73e
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.705808 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.706450 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.206434267 +0000 UTC m=+155.414756240 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.710198 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.714140 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw"]
Jan 20 11:07:06 crc kubenswrapper[4725]: W0120 11:07:06.726471 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podfb63abc7_f429_46c5_aa23_259063c394d0.slice/crio-0bd1527cc290d8dc83320ddde427bee394477a0d5ad79dbf02bbaa39c6bfdaf4 WatchSource:0}: Error finding container 0bd1527cc290d8dc83320ddde427bee394477a0d5ad79dbf02bbaa39c6bfdaf4: Status 404 returned error can't find the container with id 0bd1527cc290d8dc83320ddde427bee394477a0d5ad79dbf02bbaa39c6bfdaf4
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.809288 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.809630 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.309618371 +0000 UTC m=+155.517940344 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.837324 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.841742 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.844855 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.851655 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.853315 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-5694c8668f-hhz9f"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.866943 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-f9d7485db-75nfb"]
Jan 20 11:07:06 crc kubenswrapper[4725]: W0120 11:07:06.880125 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda8d4d608_4f73_4365_a535_71e712884eb9.slice/crio-345af4856bb883808f001f9c1d32effc6001ae28669bd0f91b80cd216c970b1c WatchSource:0}: Error finding container 345af4856bb883808f001f9c1d32effc6001ae28669bd0f91b80cd216c970b1c: Status 404 returned error can't find the container with id 345af4856bb883808f001f9c1d32effc6001ae28669bd0f91b80cd216c970b1c
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.882360 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.901020 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62"]
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.910007 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.910168 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.410154051 +0000 UTC m=+155.618476014 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:06 crc kubenswrapper[4725]: I0120 11:07:06.910635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:06 crc kubenswrapper[4725]: E0120 11:07:06.911023 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.410997977 +0000 UTC m=+155.619319950 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:06 crc kubenswrapper[4725]: W0120 11:07:06.918494 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb8859d17_62ea_47b3_ac63_537e69ec9f90.slice/crio-197cadc6e223e4cc9bea109568305559cc380b11622cebe1a21d2825bb4a630b WatchSource:0}: Error finding container 197cadc6e223e4cc9bea109568305559cc380b11622cebe1a21d2825bb4a630b: Status 404 returned error can't find the container with id 197cadc6e223e4cc9bea109568305559cc380b11622cebe1a21d2825bb4a630b
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.020725 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.021264 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.521243593 +0000 UTC m=+155.729565566 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.021369 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.021670 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.521660826 +0000 UTC m=+155.729982799 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.140583 4725 csr.go:261] certificate signing request csr-nhl29 is approved, waiting to be issued
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.141006 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.141419 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.641404333 +0000 UTC m=+155.849726306 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.149030 4725 csr.go:257] certificate signing request csr-nhl29 is issued
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.168105 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"]
Jan 20 11:07:07 crc kubenswrapper[4725]: W0120 11:07:07.190347 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f51665c_048e_4625_846b_872a367664e5.slice/crio-35099c354d9808f9453b92ab657eafedfcc7226690f0f8bb2d8ef778a9f78b4e WatchSource:0}: Error finding container 35099c354d9808f9453b92ab657eafedfcc7226690f0f8bb2d8ef778a9f78b4e: Status 404 returned error can't find the container with id 35099c354d9808f9453b92ab657eafedfcc7226690f0f8bb2d8ef778a9f78b4e
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.243016 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.243390 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.743379417 +0000 UTC m=+155.951701390 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.248376 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:07 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:07 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:07 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.248435 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.316954 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.336057 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-4s7gv"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.342560 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-9c57cc56f-psvt7"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.344633 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.344920 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.844902889 +0000 UTC m=+156.053224862 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.349641 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.349936 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" event={"ID":"3f51665c-048e-4625-846b-872a367664e5","Type":"ContainerStarted","Data":"35099c354d9808f9453b92ab657eafedfcc7226690f0f8bb2d8ef778a9f78b4e"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.351874 4725 generic.go:334] "Generic (PLEG): container finished" podID="cb0c9cf6-4966-4bd0-8933-823bc00e103c" containerID="2e68bb122f901422bd534d65f561bdbcb16452da8ef99a08675bb394f96b3e43" exitCode=0
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.351918 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" event={"ID":"cb0c9cf6-4966-4bd0-8933-823bc00e103c","Type":"ContainerDied","Data":"2e68bb122f901422bd534d65f561bdbcb16452da8ef99a08675bb394f96b3e43"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.351933 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" event={"ID":"cb0c9cf6-4966-4bd0-8933-823bc00e103c","Type":"ContainerStarted","Data":"2379ced5ce84665a02e86767ee98a7419d2fff445562a07319f9b750453c3096"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.356883 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" event={"ID":"fb63abc7-f429-46c5-aa23-259063c394d0","Type":"ContainerStarted","Data":"9f7564fd9545e487eed6bf5f4a45ad8d471c4d9f83c4d5be7e9e772823435ecb"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.356913 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" event={"ID":"fb63abc7-f429-46c5-aa23-259063c394d0","Type":"ContainerStarted","Data":"0bd1527cc290d8dc83320ddde427bee394477a0d5ad79dbf02bbaa39c6bfdaf4"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.362611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" event={"ID":"2efafa7a-ca64-4166-a72b-9b70b86953ad","Type":"ContainerStarted","Data":"f30daa4ded68e22c78405d7c86aeaa709a9dcee3fc1fa0251486cab412425528"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.362648 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" event={"ID":"2efafa7a-ca64-4166-a72b-9b70b86953ad","Type":"ContainerStarted","Data":"5602ff652bde473e5be3a3e166d41f41197b0f89ecb745b21765912b2e42e73e"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.363779 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.376047 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-744455d44c-g28q4"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.377769 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.382541 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" event={"ID":"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a","Type":"ContainerStarted","Data":"e7349390a4c35e0628a38b9d3d64db341215b6a1e71ad8e3c1a4e13f7b5153c5"}
Jan 20 11:07:07 crc kubenswrapper[4725]: W0120 11:07:07.382617 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podeb4612ff_dcf7_4e19_af27_fb8b3b54ce39.slice/crio-3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094 WatchSource:0}: Error finding container 3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094: Status 404 returned error can't find the container with id 3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.418959 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-58897d9998-vc6c2"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.424258 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-x85nm"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.429451 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.438881 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-69f744f599-5fgr9"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.444042 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" event={"ID":"a8d4d608-4f73-4365-a535-71e712884eb9","Type":"ContainerStarted","Data":"22642b0267703d0f0d4a746a0f03d271c2df67abeada67795f526bdde0045fd5"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.444116 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" event={"ID":"a8d4d608-4f73-4365-a535-71e712884eb9","Type":"ContainerStarted","Data":"345af4856bb883808f001f9c1d32effc6001ae28669bd0f91b80cd216c970b1c"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.445986 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.476832 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:07.976804708 +0000 UTC m=+156.185126691 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.482209 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.488360 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.493570 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.493611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-75nfb" event={"ID":"b8859d17-62ea-47b3-ac63-537e69ec9f90","Type":"ContainerStarted","Data":"c331766254a52bfce5ebfa9fcd1396c4a0f89ca82a69986a6b164641bcc92065"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.493633 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-f9d7485db-75nfb" event={"ID":"b8859d17-62ea-47b3-ac63-537e69ec9f90","Type":"ContainerStarted","Data":"197cadc6e223e4cc9bea109568305559cc380b11622cebe1a21d2825bb4a630b"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.502697 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.514544 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-7954f5f757-2hmdd"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.522184 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" event={"ID":"e2d56c6e-b9ad-4de9-8fe6-06b00293050e","Type":"ContainerStarted","Data":"e8e7a4e36aba81c1bb4622af4c301d49b30996cd6ad2e2e0a5c6e98da1b99ab0"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.522239 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" event={"ID":"e2d56c6e-b9ad-4de9-8fe6-06b00293050e","Type":"ContainerStarted","Data":"65a351e547318d4029df04eb1e821ccf32f46b5e2d9c44ec151c7be7e639c1ca"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.527481 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-9vt8w"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.530159 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.531619 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" event={"ID":"cf2d94b1-aa78-4a9d-8e32-232f92ec8988","Type":"ContainerStarted","Data":"d8bc74a2607b75eee22bf56295877481d8bdd99f60328b27f4fb6dc61d8b7716"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.531648 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" event={"ID":"cf2d94b1-aa78-4a9d-8e32-232f92ec8988","Type":"ContainerStarted","Data":"6d418525dbf979420269910e5a85f8365d7d1f3df290bb8a38ef200cbacfa9bf"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.534929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" event={"ID":"808fb947-228d-42c4-ba11-480348f80d8a","Type":"ContainerStarted","Data":"dba556c6bd771c1ed947e4e8bf41bbc3e5cf61149514ef85e454d5501a39fe07"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.534970 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" event={"ID":"808fb947-228d-42c4-ba11-480348f80d8a","Type":"ContainerStarted","Data":"17687c455efaeae5551e6c06d6262cc353e5526789a594cbcfef191cf08090c4"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.537328 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-dc59b4c8b-mscpw" podStartSLOduration=133.537316185 podStartE2EDuration="2m13.537316185s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.43946308 +0000 UTC m=+155.647785053" watchObservedRunningTime="2026-01-20 11:07:07.537316185 +0000 UTC m=+155.745638158"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.548888 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" event={"ID":"7f131da2-d815-48eb-b2ab-7f6df6a4039a","Type":"ContainerStarted","Data":"8d5580409dec8f34b75a3bbe4c60893b5001b4b4a1c9b037046003e5f75a7326"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.548941 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" event={"ID":"7f131da2-d815-48eb-b2ab-7f6df6a4039a","Type":"ContainerStarted","Data":"baf54609128ace7d70acf0d367555b43e502c76e9fc46dd37480fda5ebc664d4"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.550027 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.550359 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.552089 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-b67b599dd-mljkv" podStartSLOduration=132.55206562 podStartE2EDuration="2m12.55206562s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.488290569 +0000 UTC m=+155.696612542" watchObservedRunningTime="2026-01-20 11:07:07.55206562 +0000 UTC m=+155.760387593"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.554185 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-f9d7485db-75nfb" podStartSLOduration=133.554175557 podStartE2EDuration="2m13.554175557s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.520272048 +0000 UTC m=+155.728594041" watchObservedRunningTime="2026-01-20 11:07:07.554175557 +0000 UTC m=+155.762497530"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.554389 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.555937 4725 patch_prober.go:28] interesting pod/catalog-operator-68c6474976-hqvrw container/catalog-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused" start-of-body=
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.555980 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" podUID="7f131da2-d815-48eb-b2ab-7f6df6a4039a" containerName="catalog-operator" probeResult="failure" output="Get \"https://10.217.0.35:8443/healthz\": dial tcp 10.217.0.35:8443: connect: connection refused"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.561159 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" podStartSLOduration=133.561138796 podStartE2EDuration="2m13.561138796s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.54161991 +0000 UTC m=+155.749941883" watchObservedRunningTime="2026-01-20 11:07:07.561138796 +0000 UTC m=+155.769460769"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.565443 4725 generic.go:334] "Generic (PLEG): container finished" podID="1f8986ee-ae07-4ffe-89f2-c73eca4d3465" containerID="2fbae3e4c5ba192e1288227633bbd0bea8731f438425a2f82de85dd88045865a" exitCode=0
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.565531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" event={"ID":"1f8986ee-ae07-4ffe-89f2-c73eca4d3465","Type":"ContainerDied","Data":"2fbae3e4c5ba192e1288227633bbd0bea8731f438425a2f82de85dd88045865a"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.565568 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" event={"ID":"1f8986ee-ae07-4ffe-89f2-c73eca4d3465","Type":"ContainerStarted","Data":"1cca0ecc8497adce399210ce48e93b9b21075eac4a44aaa49fea4cb5f7e3ee8a"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.578297 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"2fbde62b9831aa7525ab1a824d6d69162a40c06bf17f3f1ed6515ab9b7d33004"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.578339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-85b44fc459-gdk6g" event={"ID":"5fe485a1-e14f-4c09-b5b9-f252bc42b7e8","Type":"ContainerStarted","Data":"77cf07c0bba9ef8e38147bb27882322b3a3d47058b152d80dea6d0f4917ab4c6"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.586881 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw" podStartSLOduration=132.586860497 podStartE2EDuration="2m12.586860497s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.563794849 +0000 UTC m=+155.772116842" watchObservedRunningTime="2026-01-20 11:07:07.586860497 +0000 UTC m=+155.795182470"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.586984 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.588276 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.088255601 +0000 UTC m=+156.296577574 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.591663 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-857f4d67dd-7j2sn"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.598210 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" event={"ID":"ac3b56d0-256f-40f8-b2ff-2271f82ff750","Type":"ContainerStarted","Data":"0f1fa3ac1364ebadff50537496435edbd43621a9de38871245a6371017182864"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.598264 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" event={"ID":"ac3b56d0-256f-40f8-b2ff-2271f82ff750","Type":"ContainerStarted","Data":"ef1c9a7ed1b9223c25f0c6ab857ca3e2041759c003968197fa76daf44b08d243"}
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.616712 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.628032 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-7bbb656c7d-8px9g"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.672402 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-b45778765-5fj5p" podStartSLOduration=133.672378324 podStartE2EDuration="2m13.672378324s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:07.669649968 +0000 UTC m=+155.877971951" watchObservedRunningTime="2026-01-20 11:07:07.672378324 +0000 UTC m=+155.880700297"
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.674470 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"]
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.777630 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.781998 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.28198318 +0000 UTC m=+156.490305153 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.881942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.882862 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.382812319 +0000 UTC m=+156.591134292 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:07 crc kubenswrapper[4725]: I0120 11:07:07.983313 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:07 crc kubenswrapper[4725]: E0120 11:07:07.983894 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.483878315 +0000 UTC m=+156.692200288 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.051155 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5"
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.085355 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.085692 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.585677915 +0000 UTC m=+156.793999878 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.150903 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Certificate expiration is 2027-01-20 11:02:07 +0000 UTC, rotation deadline is 2026-11-26 21:51:11.824184064 +0000 UTC
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.150961 4725 certificate_manager.go:356] kubernetes.io/kubelet-serving: Waiting 7450h44m3.6732248s for next certificate rotation
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.186653 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.186989 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.686976739 +0000 UTC m=+156.895298712 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.226264 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:08 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:08 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:08 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.226313 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.292158 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.292272 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.792249708 +0000 UTC m=+157.000571681 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.292861 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.293379 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.793368594 +0000 UTC m=+157.001690567 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.394269 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.394607 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.894578065 +0000 UTC m=+157.102900038 (durationBeforeRetry 500ms).
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.394840 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.395192 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:08.895180164 +0000 UTC m=+157.103502137 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.503611 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.503831 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.003799009 +0000 UTC m=+157.212120982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.611555 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.611846 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.111835285 +0000 UTC m=+157.320157258 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.697592 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" event={"ID":"3f51665c-048e-4625-846b-872a367664e5","Type":"ContainerStarted","Data":"6c9aae534dfcaf85a01bb59882019a09dd63f3cdb8ff8a81eadda6f1b30d5c0a"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.700509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" event={"ID":"c3dff36b-2e27-4c6b-bee4-19cd58833ea7","Type":"ContainerStarted","Data":"bfb6b9f87d7807de82f889139680e6cafe692c66fe25ca54d534263dd2f4f22e"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.707473 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" event={"ID":"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39","Type":"ContainerStarted","Data":"3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094"} Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.712327 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.712632 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. 
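
The repeating MountDevice/TearDownAt failures above all share one cause: the kubelet's in-memory registry of node-registered CSI plugins does not yet contain kubevirt.io.hostpath-provisioner, because the csi-hostpathplugin pod (whose containers only start later in this log) has not yet announced the driver over the kubelet plugin-registration socket under /var/lib/kubelet/plugins_registry/. Until that registration lands, every volume operation that needs a CSI client fails fast and is requeued (durationBeforeRetry 500ms). A minimal sketch, assuming a reachable API server, a kubeconfig at the default location, and the node name "crc" taken from this log, of checking which drivers the node has actually registered via the CSINode object that mirrors kubelet's view:

    // csinode_check.go - hedged sketch: list the CSI drivers registered on a node.
    // Assumptions: kubeconfig at $HOME/.kube/config; node name "crc" (from this log).
    package main

    import (
    	"context"
    	"fmt"
    	"path/filepath"

    	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    	"k8s.io/client-go/kubernetes"
    	"k8s.io/client-go/tools/clientcmd"
    	"k8s.io/client-go/util/homedir"
    )

    func main() {
    	kubeconfig := filepath.Join(homedir.HomeDir(), ".kube", "config")
    	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
    	if err != nil {
    		panic(err)
    	}
    	cs, err := kubernetes.NewForConfig(cfg)
    	if err != nil {
    		panic(err)
    	}
    	// CSINode is maintained from kubelet's plugin-registration state; an empty
    	// drivers list here corresponds to the "not found in the list of registered
    	// CSI drivers" errors in the log.
    	node, err := cs.StorageV1().CSINodes().Get(context.TODO(), "crc", metav1.GetOptions{})
    	if err != nil {
    		panic(err)
    	}
    	for _, d := range node.Spec.Drivers {
    		// Expect kubevirt.io.hostpath-provisioner once the plugin pod is up.
    		fmt.Println("registered:", d.Name)
    	}
    }
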
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.787796 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" event={"ID":"a8d4d608-4f73-4365-a535-71e712884eb9","Type":"ContainerStarted","Data":"2f44ccaad1054e141bccb2fc2d00e1ca136ba341c2c3e5f6648bb3ca9d7659fd"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.791235 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" event={"ID":"396ed454-f2c7-483a-8aad-0953041099b5","Type":"ContainerStarted","Data":"7c8e9b4a6d96cf3000ea2cea8585188b82d89e8eeb465223f79e43e793a0e860"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.797139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" event={"ID":"f27b4eea-081e-421a-83e9-8a5266163c53","Type":"ContainerStarted","Data":"977259fa46250cfa3faaed91e90d3a012f9520c8708543df9be4c3821af4a14b"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.815474 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.817196 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.31718484 +0000 UTC m=+157.525506813 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.841246 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" event={"ID":"222f710d-f6a2-48e7-9175-55b50f3aba30","Type":"ContainerStarted","Data":"da8ad133548044f221e0607f52878ae85abe37a7302d30eb560a3905b5f05d4b"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.844796 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerStarted","Data":"146e2a40139c8580a82a96198237e6caf20d339116832d1224d6065c5d51bf27"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.846006 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" event={"ID":"502a4051-5a60-4e90-a3f2-7dc035950a9b","Type":"ContainerStarted","Data":"0b78375c7ed8f9916a58dd59c26f3043217b694c6d335a958edaddd11c21782a"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.846957 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"7824144afcc8a399d8ad02f47566e1f9f7e8fccfd9082edf2a275537cfa7c907"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.856560 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5"
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.857934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" event={"ID":"eca1f8da-59f2-404e-a5e0-dbe1a191b885","Type":"ContainerStarted","Data":"24b3efbd35deaf29f8ae99f73d94b4a37207439f887544f47e5d619803f53177"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.859058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4s7gv" event={"ID":"6023e844-87d6-4f4d-bf86-a685b937cda5","Type":"ContainerStarted","Data":"23efdca88d24391c79cbdc8101644526dfe796074cbe106632842089a3aea5ff"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.859100 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-4s7gv" event={"ID":"6023e844-87d6-4f4d-bf86-a685b937cda5","Type":"ContainerStarted","Data":"bec68e0cbb9ef68aa43cb22135599ea459c3058d3751e304838b8ee5856a5298"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.862286 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" event={"ID":"b07c5d50-bb91-412d-b86a-3d736a16a81d","Type":"ContainerStarted","Data":"17f031bacd1eda1c2ba5121f6412c48147956df7560c565dbac566f72b8d91d9"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.863714 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" event={"ID":"cb0c9cf6-4966-4bd0-8933-823bc00e103c","Type":"ContainerStarted","Data":"8c9ae23bdbd75e8f49ad08210ad1b5884a445b42c455c9175cf22d7caa19bfef"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.864986 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" event={"ID":"4df8c05f-b523-439b-908b-c4f34b22b7e9","Type":"ContainerStarted","Data":"50a462970ad6d65adb263c111a72af15f6635fc334edd6ec6c733371acac627f"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.866575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" event={"ID":"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a","Type":"ContainerStarted","Data":"0ad6a5b17a1ff2606b662eaa2a0e8d9edadea69bba3e967d770049369283aec3"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.866603 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" event={"ID":"fb5baaeb-c041-44ff-9cb4-e2962a0d1b5a","Type":"ContainerStarted","Data":"8667f6a3ff3b7a1116ad8912ad410ee6fb4a8a3c9575abcccafa5a4aba6df766"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.867896 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" event={"ID":"9d0ff97b-8da9-4156-a78b-9ebd6886313f","Type":"ContainerStarted","Data":"d3d2b9ac9980bdb6cc0f6489ef75a6ab145564c82979a42ac7db2801b2c88e21"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.869175 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" event={"ID":"cf2d94b1-aa78-4a9d-8e32-232f92ec8988","Type":"ContainerStarted","Data":"fbd9c7453ed542b308e811a5a43b148b28154367b997c7b9389bd85162bc19b8"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.870461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" event={"ID":"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b","Type":"ContainerStarted","Data":"9e1ef4fbb89013bf638486d8be02122f0bc36ac09c8b6e368cea4cb9dc8d23eb"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.871192 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" event={"ID":"9a6106c0-75fa-4285-bc23-06ced58cf133","Type":"ContainerStarted","Data":"f39928c8d7256975b95a8abe066b49247f38d754512e9fe57502d4feea0d8501"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.872168 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" event={"ID":"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb","Type":"ContainerStarted","Data":"0ae322f03e68ee5dfe43a307a875b1e4f6979860e0505612a2338182052c17a2"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.873353 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" event={"ID":"600286e6-beb3-40f1-9077-9c8abf34d55a","Type":"ContainerStarted","Data":"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.873379 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" event={"ID":"600286e6-beb3-40f1-9077-9c8abf34d55a","Type":"ContainerStarted","Data":"ade77836dcd269f9c5de0b97ad651f7a735e267f67b9c6aa9acfc5f72e48f82f"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.874255 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.876089 4725 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lwhzw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.876127 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.876411 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" event={"ID":"876f0761-c4c3-42f7-81f8-9a26071a7676","Type":"ContainerStarted","Data":"d5639b34e08781dea22f4cadbcc373a0ec2674e0868509a628145723a268aa0f"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.878244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x85nm" event={"ID":"db710f25-e573-414c-9129-0dfa945d0b71","Type":"ContainerStarted","Data":"cf7ba3a3a7274ff7821b5279e40ba6e2bd9919ddb8fe93c0e131e2e112f0358e"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.880420 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" event={"ID":"8428545d-e40d-4259-b579-ce7bff401888","Type":"ContainerStarted","Data":"1b572bd552d0092cfbb3df230d8d034e2e5ab55b33aa4f3b57fee11c4f64e6e4"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.884146 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" event={"ID":"29ff5711-1e81-4ed0-8acd-6124100de37d","Type":"ContainerStarted","Data":"19358ebd603e8195e69c6c2b23e06e1e71a1829b126088ccbc3ad70199c568ac"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.886266 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" event={"ID":"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59","Type":"ContainerStarted","Data":"7da506d1dfa708183b544bfc4756606b68f7e40ea8c138fead633a74346c076f"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.889812 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" event={"ID":"e1eba244-7c59-4933-ad4c-5dccc8fdc854","Type":"ContainerStarted","Data":"9e42315c152eaa8fbaf3a0fc31f4242fe3c5828fd4c64a1a0d048a412c00207b"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.894013 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" event={"ID":"808fb947-228d-42c4-ba11-480348f80d8a","Type":"ContainerStarted","Data":"f05222988264b316f6dffb71d4eb7816c4979708a52ff1a83b0e27db6b9aeb83"}
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.901910 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-68c6474976-hqvrw"
Jan 20 11:07:08 crc kubenswrapper[4725]: I0120 11:07:08.916443 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:08 crc kubenswrapper[4725]: E0120 11:07:08.917925 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.417911316 +0000 UTC m=+157.626233289 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.017876 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-74547568cd-9lr6k" podStartSLOduration=134.017855537 podStartE2EDuration="2m14.017855537s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.016233946 +0000 UTC m=+157.224555919" watchObservedRunningTime="2026-01-20 11:07:09.017855537 +0000 UTC m=+157.226177510"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.018723 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.018978 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.518968152 +0000 UTC m=+157.727290125 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.147327 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" podStartSLOduration=135.147309579 podStartE2EDuration="2m15.147309579s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.127818514 +0000 UTC m=+157.336140497" watchObservedRunningTime="2026-01-20 11:07:09.147309579 +0000 UTC m=+157.355631552"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.149063 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-5694c8668f-hhz9f" podStartSLOduration=134.149056963 podStartE2EDuration="2m14.149056963s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.146360628 +0000 UTC m=+157.354682601" watchObservedRunningTime="2026-01-20 11:07:09.149056963 +0000 UTC m=+157.357378936"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.192333 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.192978 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.692954968 +0000 UTC m=+157.901276941 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.216639 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-4s7gv" podStartSLOduration=10.216622354 podStartE2EDuration="10.216622354s" podCreationTimestamp="2026-01-20 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.199979509 +0000 UTC m=+157.408301482" watchObservedRunningTime="2026-01-20 11:07:09.216622354 +0000 UTC m=+157.424944327"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.217059 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-665b6dd947-6s2qz" podStartSLOduration=135.217050078 podStartE2EDuration="2m15.217050078s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.214522468 +0000 UTC m=+157.422844441" watchObservedRunningTime="2026-01-20 11:07:09.217050078 +0000 UTC m=+157.425372051"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.241362 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-59844c95c7-rlw62" podStartSLOduration=134.241345673 podStartE2EDuration="2m14.241345673s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.23869486 +0000 UTC m=+157.447016833" watchObservedRunningTime="2026-01-20 11:07:09.241345673 +0000 UTC m=+157.449667646"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.247347 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:09 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:09 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:09 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.247410 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.266460 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" podStartSLOduration=133.266435864 podStartE2EDuration="2m13.266435864s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:09.264031599 +0000 UTC m=+157.472353572" watchObservedRunningTime="2026-01-20 11:07:09.266435864 +0000 UTC m=+157.474757837"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.294154 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.294599 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.794580592 +0000 UTC m=+158.002902565 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.395420 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.395615 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.895583767 +0000 UTC m=+158.103905740 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.395747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.396125 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.896110583 +0000 UTC m=+158.104432556 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.496434 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.497690 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:09.997669666 +0000 UTC m=+158.205991639 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.598606 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.599127 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.099111953 +0000 UTC m=+158.307433926 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.699282 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.699630 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.199610083 +0000 UTC m=+158.407932046 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.801048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.801458 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.301445153 +0000 UTC m=+158.509767126 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.902323 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:09 crc kubenswrapper[4725]: E0120 11:07:09.902625 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.402607343 +0000 UTC m=+158.610929326 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.916150 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"2a535dc5fe3813256d22334a0b77b08466b5880b0812562973bec061393a4d38"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.917646 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" event={"ID":"4df8c05f-b523-439b-908b-c4f34b22b7e9","Type":"ContainerStarted","Data":"af77a96bd9ba35fb3ac538e2761fa92acee4c18ee7e63ab0916e014c047aa256"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.921530 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-8pplm"]
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.922215 4725 generic.go:334] "Generic (PLEG): container finished" podID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" containerID="e8e7a4e36aba81c1bb4622af4c301d49b30996cd6ad2e2e0a5c6e98da1b99ab0" exitCode=0
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.922602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" event={"ID":"e2d56c6e-b9ad-4de9-8fe6-06b00293050e","Type":"ContainerDied","Data":"e8e7a4e36aba81c1bb4622af4c301d49b30996cd6ad2e2e0a5c6e98da1b99ab0"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.922824 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.924578 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" event={"ID":"876f0761-c4c3-42f7-81f8-9a26071a7676","Type":"ContainerStarted","Data":"6ded86d27c51be355b6b1ed8bb3015d47742d646e66fff0dabb047d8e4d55497"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.925450 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.926940 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x85nm" event={"ID":"db710f25-e573-414c-9129-0dfa945d0b71","Type":"ContainerStarted","Data":"acc0e8d380386dccbb93d86c4c17c9015b976635b8e2fb08ed60728195d4e9f6"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.928931 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" event={"ID":"396ed454-f2c7-483a-8aad-0953041099b5","Type":"ContainerStarted","Data":"b790973920d56a9ce46f8c4b3b7e161ff94a2029028c99f437a3f218f74faa88"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.930373 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" event={"ID":"b07c5d50-bb91-412d-b86a-3d736a16a81d","Type":"ContainerStarted","Data":"a2ed9a7c94d3a76a3edf66f57d30c78a23fa246c399b614c543f24b1735b8ce9"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.932167 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" event={"ID":"502a4051-5a60-4e90-a3f2-7dc035950a9b","Type":"ContainerStarted","Data":"4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.932962 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.934806 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.934850 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.935952 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" event={"ID":"cb0c9cf6-4966-4bd0-8933-823bc00e103c","Type":"ContainerStarted","Data":"9266f669098b4acf7bf846a4f35ee36aeffd9332c7441f6d5f058d68fe3c3fd5"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.938814 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" event={"ID":"29ff5711-1e81-4ed0-8acd-6124100de37d","Type":"ContainerStarted","Data":"b15b917912baf61f61ad944365802ab24535e6598f766f30e676fc72d19ffa4e"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.940859 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" event={"ID":"08bc2ba3-3f1f-40df-bf3d-1d5ed634945b","Type":"ContainerStarted","Data":"195122e431daedb3c4477b730ec22b44608aea9ea19b78430e2442a39e386352"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.941374 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console-operator/console-operator-58897d9998-vc6c2"
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.942644 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" event={"ID":"1f8986ee-ae07-4ffe-89f2-c73eca4d3465","Type":"ContainerStarted","Data":"29cf57e4793fd00dc38bb4eef89cfdf01955ea5fc0076a381c6b53926b3ab853"}
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.943383 4725 patch_prober.go:28] interesting pod/console-operator-58897d9998-vc6c2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused" start-of-body=
Jan 20 11:07:09 crc kubenswrapper[4725]: I0120 11:07:09.943554 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" podUID="08bc2ba3-3f1f-40df-bf3d-1d5ed634945b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": dial tcp 10.217.0.29:8443: connect: connection refused"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.059757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.060853 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.560812962 +0000 UTC m=+158.769134955 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.063448 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" event={"ID":"9d0ff97b-8da9-4156-a78b-9ebd6886313f","Type":"ContainerStarted","Data":"d49029f806fcceece28215f1aecf257c556fa187a2ac2fa27c2e9c6b0548f7bc"}
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.072485 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-8pplm"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.077577 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" event={"ID":"f27b4eea-081e-421a-83e9-8a5266163c53","Type":"ContainerStarted","Data":"eb550e6132dffe9232a2199f66417b13c2f3e0934104253d5a9a59db899c9260"}
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.096878 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.098291 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.101615 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" event={"ID":"3f51665c-048e-4625-846b-872a367664e5","Type":"ContainerStarted","Data":"8a28d3e5f9a6753eb2b00804bf186cdc01ed67f24f6ffcf21e59f0762b62548b"}
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.103233 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.106000 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.128525 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.133099 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" event={"ID":"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39","Type":"ContainerStarted","Data":"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053"}
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.133782 4725 patch_prober.go:28] interesting pod/route-controller-manager-6576b87f9c-lwhzw container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused" start-of-body=
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.133814 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.8:8443/healthz\": dial tcp 10.217.0.8:8443: connect: connection refused"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.135133 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.138508 4725 patch_prober.go:28] interesting pod/controller-manager-879f6c89f-r5qmp container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body=
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.138547 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.140124 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-756b6f6bc6-kshvw" podStartSLOduration=136.140112081 podStartE2EDuration="2m16.140112081s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.135989982 +0000 UTC m=+158.344311965" watchObservedRunningTime="2026-01-20 11:07:10.140112081 +0000 UTC m=+158.348434054"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.211216 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.211677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.211758 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.212005 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.711983858 +0000 UTC m=+158.920305831 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.213651 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.310978 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.316898 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.816878575 +0000 UTC m=+159.025200548 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.348028 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-vbr29"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.350021 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.357734 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" podStartSLOduration=134.357715513 podStartE2EDuration="2m14.357715513s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.34464023 +0000 UTC m=+158.552962203" watchObservedRunningTime="2026-01-20 11:07:10.357715513 +0000 UTC m=+158.566037486"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.362568 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:10 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:10 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:10 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.362936 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.367247 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbr29"]
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.411821 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412179 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412314 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412372 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412600 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.412626 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.413521 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:10.913503222 +0000 UTC m=+159.121825185 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.415341 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.416923 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.436781 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-777779d784-6cxkl" podStartSLOduration=134.436763685 podStartE2EDuration="2m14.436763685s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.392909583 +0000 UTC m=+158.601231546" watchObservedRunningTime="2026-01-20 11:07:10.436763685 +0000 UTC m=+158.645085658"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.437659 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podStartSLOduration=135.437653663 podStartE2EDuration="2m15.437653663s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.434986529 +0000 UTC m=+158.643308502" watchObservedRunningTime="2026-01-20 11:07:10.437653663 +0000 UTC m=+158.645975636"
firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.434986529 +0000 UTC m=+158.643308502" watchObservedRunningTime="2026-01-20 11:07:10.437653663 +0000 UTC m=+158.645975636" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.456291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") pod \"community-operators-8pplm\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.503477 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" podStartSLOduration=137.503455518 podStartE2EDuration="2m17.503455518s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.481459874 +0000 UTC m=+158.689781857" watchObservedRunningTime="2026-01-20 11:07:10.503455518 +0000 UTC m=+158.711777491" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.505732 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.507035 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514381 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514420 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514485 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z8wq8\" (UniqueName: 
\"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514535 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.514569 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.515398 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.015384874 +0000 UTC m=+159.223706847 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.515866 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.516183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.527479 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.553334 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-9c57cc56f-psvt7" podStartSLOduration=134.55331012 podStartE2EDuration="2m14.55331012s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.544188972 +0000 UTC m=+158.752510975" watchObservedRunningTime="2026-01-20 11:07:10.55331012 +0000 UTC m=+158.761632093" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.581138 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rkgp6\" (UniqueName: 
\"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") pod \"certified-operators-6n4zh\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.582619 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-69f744f599-5fgr9" podStartSLOduration=137.582607644 podStartE2EDuration="2m17.582607644s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.580352563 +0000 UTC m=+158.788674546" watchObservedRunningTime="2026-01-20 11:07:10.582607644 +0000 UTC m=+158.790929617" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.620419 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.620566 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.12053793 +0000 UTC m=+159.328859903 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.620714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.620752 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.620814 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621393 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mk8lh\" (UniqueName: 
\"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621444 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621481 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621501 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.621310 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.621829 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.12181649 +0000 UTC m=+159.330138463 (durationBeforeRetry 500ms). 
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.622111 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.641844 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" podStartSLOduration=136.641825621 podStartE2EDuration="2m16.641825621s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.619817957 +0000 UTC m=+158.828139930" watchObservedRunningTime="2026-01-20 11:07:10.641825621 +0000 UTC m=+158.850147594"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.647016 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") pod \"community-operators-vbr29\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.649606 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-78cbb6b69f-sh5db" podStartSLOduration=135.649594756 podStartE2EDuration="2m15.649594756s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.645738185 +0000 UTC m=+158.854060178" watchObservedRunningTime="2026-01-20 11:07:10.649594756 +0000 UTC m=+158.857916739"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.724613 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.724839 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.724893 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.724936 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mk8lh\" (UniqueName: \"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.725330 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.225317523 +0000 UTC m=+159.433639496 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.725665 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.725867 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.726792 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" podStartSLOduration=136.72677466 podStartE2EDuration="2m16.72677466s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:10.670525006 +0000 UTC m=+158.878846969" watchObservedRunningTime="2026-01-20 11:07:10.72677466 +0000 UTC m=+158.935096633"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.729940 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pplm"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.731867 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbr29"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.744748 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n4zh"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.760186 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mk8lh\" (UniqueName: \"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") pod \"certified-operators-vs4qk\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.826196 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.853428 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vs4qk"
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.855057 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.355032623 +0000 UTC m=+159.563354596 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:10 crc kubenswrapper[4725]: I0120 11:07:10.928514 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:10 crc kubenswrapper[4725]: E0120 11:07:10.928939 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.428923683 +0000 UTC m=+159.637245656 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.030714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.131432 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.631408387 +0000 UTC m=+159.839730360 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.131668 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.132000 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.631989556 +0000 UTC m=+159.840311529 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.255938 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.256359 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.756347147 +0000 UTC m=+159.964669110 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.280815 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:11 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:11 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:11 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.280907 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.356910 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.358482 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.858463906 +0000 UTC m=+160.066785879 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.473892 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.477200 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:11.97718063 +0000 UTC m=+160.185502603 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.588638 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.589151 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.08912955 +0000 UTC m=+160.297451523 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.694870 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.695673 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.195659939 +0000 UTC m=+160.403981912 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.712855 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" event={"ID":"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb","Type":"ContainerStarted","Data":"5fcb5a5f6fd4a62a750e6611ee8b9381e62e99e69b8987cfd51959ab622d7a52"}
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.800188 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.800708 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.30068902 +0000 UTC m=+160.509010993 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.828127 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" event={"ID":"9d0ff97b-8da9-4156-a78b-9ebd6886313f","Type":"ContainerStarted","Data":"84ad582d13d994d4f0f306690526c0413b3d214c61ca4bc111ea0ba825199abb"}
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.840551 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" event={"ID":"222f710d-f6a2-48e7-9175-55b50f3aba30","Type":"ContainerStarted","Data":"61a7ab4966091d4f69d91db412773e4eeb151873ba5ebf492021b19b86bc66dc"}
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.892129 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-5b745b69d9-2dsbj" podStartSLOduration=137.892107572 podStartE2EDuration="2m17.892107572s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:11.8885201 +0000 UTC m=+160.096842073" watchObservedRunningTime="2026-01-20 11:07:11.892107572 +0000 UTC m=+160.100429565"
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.893799 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-dns/dns-default-x85nm"
Jan 20 11:07:11 crc kubenswrapper[4725]: I0120 11:07:11.908234 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:11 crc kubenswrapper[4725]: E0120 11:07:11.908618 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.408604373 +0000 UTC m=+160.616926346 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.092983 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" event={"ID":"eca1f8da-59f2-404e-a5e0-dbe1a191b885","Type":"ContainerStarted","Data":"48fa444241a2b7476ccea56c69f3435aa4b6f39132a3a0083775d0abf0a56a37"}
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.093501 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.093873 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.593851183 +0000 UTC m=+160.802173156 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.120163 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" event={"ID":"9a6106c0-75fa-4285-bc23-06ced58cf133","Type":"ContainerStarted","Data":"6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34"}
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.120241 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.130867 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"]
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.132132 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.134180 4725 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lhx4z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body=
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.134223 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.138642 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-5fdd9b5758-wrjzq" podStartSLOduration=137.138612144 podStartE2EDuration="2m17.138612144s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.126754771 +0000 UTC m=+160.335076754" watchObservedRunningTime="2026-01-20 11:07:12.138612144 +0000 UTC m=+160.346934117"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.140746 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" event={"ID":"c3dff36b-2e27-4c6b-bee4-19cd58833ea7","Type":"ContainerStarted","Data":"6072c7785a960e401b9dbe1aa849d245daee87165d43547c831fd6da21c65c14"}
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.262493 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.263010 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.762992616 +0000 UTC m=+160.971314589 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.263373 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.274416 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" event={"ID":"e1eba244-7c59-4933-ad4c-5dccc8fdc854","Type":"ContainerStarted","Data":"ad73e49a159a3f1e9ce914c33abe4915142f3b24a74a5d1133801772668fbe5f"}
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.276426 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.288334 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d7t4z container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused" start-of-body=
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.288401 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" podUID="e1eba244-7c59-4933-ad4c-5dccc8fdc854" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": dial tcp 10.217.0.37:5443: connect: connection refused"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.288908 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld
Jan 20 11:07:12 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld
Jan 20 11:07:12 crc kubenswrapper[4725]: [+]process-running ok
Jan 20 11:07:12 crc kubenswrapper[4725]: healthz check failed
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.288961 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.292617 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" event={"ID":"4df8c05f-b523-439b-908b-c4f34b22b7e9","Type":"ContainerStarted","Data":"2ba53f9e92e7a0a3ccbcd8596513e2e9bab5a869b9fa262deb6dfb896e7387bb"}
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.308865 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"]
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.334843 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-x85nm" podStartSLOduration=13.334822691 podStartE2EDuration="13.334822691s" podCreationTimestamp="2026-01-20 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.333218661 +0000 UTC m=+160.541540634" watchObservedRunningTime="2026-01-20 11:07:12.334822691 +0000 UTC m=+160.543144664"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.351135 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" event={"ID":"293d5f2d-38b8-49ad-b7cc-eaf6ea931e59","Type":"ContainerStarted","Data":"8c1fce15f7bb048ecca7c9ccb244baea7c59777b5805576f1d0e641d8c3d55a6"}
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.378294 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerStarted","Data":"c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56"}
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.389336 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2hmdd"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.390558 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.390853 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.390894 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.391128 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.392259 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:12.892236102 +0000 UTC m=+161.100558075 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.398540 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" event={"ID":"8428545d-e40d-4259-b579-ce7bff401888","Type":"ContainerStarted","Data":"36f5f86f95b4bde71664c879d9ab5f8775595d7b1d17e21294d00517a7a63568"}
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.398571 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.412853 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body=
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.412887 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.444208 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.549640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.549701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.549724 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.549779 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.553608 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.053575448 +0000 UTC m=+161.261897421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.554034 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.554422 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.610998 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body=
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.611069 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.615881 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" podStartSLOduration=136.615855872 podStartE2EDuration="2m16.615855872s" podCreationTimestamp="2026-01-20 11:04:56 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.610720921 +0000 UTC m=+160.819042904" watchObservedRunningTime="2026-01-20 11:07:12.615855872 +0000 UTC m=+160.824177845"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.616774 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podStartSLOduration=139.616767411 podStartE2EDuration="2m19.616767411s" podCreationTimestamp="2026-01-20 11:04:53 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.382145083 +0000 UTC m=+160.590467066" watchObservedRunningTime="2026-01-20 11:07:12.616767411 +0000 UTC m=+160.825089374"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.622760 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.623597 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.634159 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") pod \"redhat-marketplace-c2jtp\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " pod="openshift-marketplace/redhat-marketplace-c2jtp"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.635427 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"]
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.651507 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxmdj"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.652919 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") "
Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.655520 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.155492392 +0000 UTC m=+161.363814375 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.660101 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq"
Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.678522 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-766d6c64bb-pnkqn" podStartSLOduration=137.678502638 podStartE2EDuration="2m17.678502638s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.662289896 +0000 UTC m=+160.870611869" watchObservedRunningTime="2026-01-20 11:07:12.678502638 +0000 UTC m=+160.886824611"
Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.680801 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.18078421 +0000 UTC m=+161.389106183 (durationBeforeRetry 500ms).
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.687413 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.883040 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-78b949d7b-gs8nk" podStartSLOduration=137.883013505 podStartE2EDuration="2m17.883013505s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.810149449 +0000 UTC m=+161.018471422" watchObservedRunningTime="2026-01-20 11:07:12.883013505 +0000 UTC m=+161.091335478" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.887711 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.887951 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.887980 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.888004 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.888205 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:13.388188959 +0000 UTC m=+161.596510932 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.888561 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.888584 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.896293 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.976981 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-7954f5f757-2hmdd" podStartSLOduration=138.976960608 podStartE2EDuration="2m18.976960608s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:12.900408424 +0000 UTC m=+161.108730397" watchObservedRunningTime="2026-01-20 11:07:12.976960608 +0000 UTC m=+161.185282581" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.990967 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.991014 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.991054 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.991163 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:12 crc kubenswrapper[4725]: E0120 11:07:12.991543 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. 
No retries permitted until 2026-01-20 11:07:13.491526728 +0000 UTC m=+161.699848701 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.992677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:12 crc kubenswrapper[4725]: I0120 11:07:12.992912 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.039250 4725 patch_prober.go:28] interesting pod/console-f9d7485db-75nfb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.039309 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-75nfb" podUID="b8859d17-62ea-47b3-ac63-537e69ec9f90" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.046946 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.718397 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") pod \"redhat-marketplace-lxmdj\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.722609 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-84d6567774-lrwnv" podStartSLOduration=138.722591378 podStartE2EDuration="2m18.722591378s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:13.011998472 +0000 UTC m=+161.220320435" watchObservedRunningTime="2026-01-20 11:07:13.722591378 +0000 UTC m=+161.930913351" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.730282 4725 patch_prober.go:28] interesting pod/console-operator-58897d9998-vc6c2 container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.29:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while 
awaiting headers)" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.730376 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" podUID="08bc2ba3-3f1f-40df-bf3d-1d5ed634945b" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.29:8443/readyz\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.741774 4725 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lhx4z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.741833 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": dial tcp 10.217.0.10:6443: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.742963 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.743343 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Liveness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.743374 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.743773 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.743818 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.764841 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.764895 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: 
connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.764954 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.764966 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.769566 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:13 crc kubenswrapper[4725]: E0120 11:07:13.770066 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.770032124 +0000 UTC m=+162.978354097 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:13 crc kubenswrapper[4725]: I0120 11:07:13.949570 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:13 crc kubenswrapper[4725]: E0120 11:07:13.950194 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.450153748 +0000 UTC m=+162.658475721 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.107705 4725 kubelet.go:2526] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.061s" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.108005 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.108054 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.108120 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-7777fb866f-cmnx5" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.108135 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.109582 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.109603 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.110242 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:14 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:14 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:14 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.110284 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.111188 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-6b444d44fb-tvh28" podStartSLOduration=139.11116333 podStartE2EDuration="2m19.11116333s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:14.110891651 +0000 UTC m=+162.319213624" watchObservedRunningTime="2026-01-20 11:07:14.11116333 +0000 UTC m=+162.319485303" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.111824 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-vbr29"] Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.111846 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 
11:07:14.111944 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.112184 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.112219 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.112696 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.612683568 +0000 UTC m=+162.821005531 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.129150 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.183979 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerStarted","Data":"01a79750127c09ea5c6dc20b661d6675fdb1d12c0c260ea3667e9b8f6125164f"} Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221158 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221503 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221668 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221690 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.221795 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.222425 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-58897d9998-vc6c2" Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.223194 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.723176612 +0000 UTC m=+162.931498585 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.237315 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:14 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:14 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:14 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.237371 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.318677 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323808 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323868 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323942 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323976 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.323997 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.324023 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.324820 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.325131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.325640 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.325845 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.332695 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-x85nm" event={"ID":"db710f25-e573-414c-9129-0dfa945d0b71","Type":"ContainerStarted","Data":"1d9e566e86385a42798402a5e088ba782e33fbb9244935e33f3450af02dbca60"} Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.352824 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.852781128 +0000 UTC m=+163.061103111 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.374066 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") pod \"redhat-operators-78bg4\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.393274 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" event={"ID":"eca1f8da-59f2-404e-a5e0-dbe1a191b885","Type":"ContainerStarted","Data":"164dfe079177ee6da99408a57439b21e42108b8ebad11255499fbdf5b4386afe"} Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.421313 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" event={"ID":"6275b0b4-f9e0-4ecc-8a7a-3d702ec753eb","Type":"ContainerStarted","Data":"f1b4836b5552db8e9659bf24041c0c226e75f53c819ee8e33cb00a2edd304a13"} Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.427498 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") pod \"redhat-operators-6nxjc\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.428556 4725 patch_prober.go:28] interesting pod/marketplace-operator-79b997595-tgvmj container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" start-of-body= Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.428602 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.26:8080/healthz\": dial tcp 10.217.0.26:8080: connect: connection refused" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.429705 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.430950 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:14.930931683 +0000 UTC m=+163.139253656 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.444961 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.445004 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.462755 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"] Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.510515 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.577658 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.649020 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.652206 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.152189109 +0000 UTC m=+163.360511082 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.684353 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.684864 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.184837308 +0000 UTC m=+163.393159281 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.733314 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d7t4z container/packageserver namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.733398 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" podUID="e1eba244-7c59-4933-ad4c-5dccc8fdc854" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.740027 4725 patch_prober.go:28] interesting pod/packageserver-d55dfcdfc-d7t4z container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.740155 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" podUID="e1eba244-7c59-4933-ad4c-5dccc8fdc854" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.37:5443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.765990 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" 
pods=["openshift-marketplace/community-operators-8pplm"] Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.889760 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.890332 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.390320936 +0000 UTC m=+163.598642909 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:14 crc kubenswrapper[4725]: I0120 11:07:14.996706 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:14 crc kubenswrapper[4725]: E0120 11:07:14.997180 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.497161755 +0000 UTC m=+163.705483728 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.098592 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.099034 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.599019397 +0000 UTC m=+163.807341370 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.201504 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.201995 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.701978903 +0000 UTC m=+163.910300876 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.206830 4725 patch_prober.go:28] interesting pod/apiserver-76f77b778f-twkw7 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]log ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]etcd ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/generic-apiserver-start-informers ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/max-in-flight-filter ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 20 11:07:15 crc kubenswrapper[4725]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 20 11:07:15 crc kubenswrapper[4725]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/project.openshift.io-projectcache ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-startinformers ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 20 11:07:15 crc kubenswrapper[4725]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 20 11:07:15 crc kubenswrapper[4725]: livez check failed Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.207196 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" 
podUID="cb0c9cf6-4966-4bd0-8933-823bc00e103c" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.244794 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:15 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:15 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:15 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.244844 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.271186 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-d55dfcdfc-d7t4z" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.303197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.303602 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.803588167 +0000 UTC m=+164.011910140 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.352841 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-744455d44c-g28q4" podStartSLOduration=141.352824629 podStartE2EDuration="2m21.352824629s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:15.351825299 +0000 UTC m=+163.560147272" watchObservedRunningTime="2026-01-20 11:07:15.352824629 +0000 UTC m=+163.561146602" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.372404 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.406682 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") pod \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.406766 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") pod \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.406866 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.406949 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") pod \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\" (UID: \"e2d56c6e-b9ad-4de9-8fe6-06b00293050e\") " Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.408342 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:15.908292668 +0000 UTC m=+164.116614641 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.413348 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume" (OuterVolumeSpecName: "config-volume") pod "e2d56c6e-b9ad-4de9-8fe6-06b00293050e" (UID: "e2d56c6e-b9ad-4de9-8fe6-06b00293050e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.430767 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl" (OuterVolumeSpecName: "kube-api-access-dw4rl") pod "e2d56c6e-b9ad-4de9-8fe6-06b00293050e" (UID: "e2d56c6e-b9ad-4de9-8fe6-06b00293050e"). InnerVolumeSpecName "kube-api-access-dw4rl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.431121 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "e2d56c6e-b9ad-4de9-8fe6-06b00293050e" (UID: "e2d56c6e-b9ad-4de9-8fe6-06b00293050e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.445267 4725 patch_prober.go:28] interesting pod/oauth-openshift-558db77b4-lhx4z container/oauth-openshift namespace/openshift-authentication: Readiness probe status=failure output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.445329 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" probeResult="failure" output="Get \"https://10.217.0.10:6443/healthz\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.458668 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-857f4d67dd-7j2sn" podStartSLOduration=140.458649146 podStartE2EDuration="2m20.458649146s" podCreationTimestamp="2026-01-20 11:04:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:15.457660566 +0000 UTC m=+163.665982539" watchObservedRunningTime="2026-01-20 11:07:15.458649146 +0000 UTC m=+163.666971119" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.475702 4725 generic.go:334] "Generic (PLEG): container finished" podID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerID="892418dd3e77ceab40f34a8a0fd5716151217dc2c55480d979119a50b49216a9" exitCode=0 Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.475756 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerDied","Data":"892418dd3e77ceab40f34a8a0fd5716151217dc2c55480d979119a50b49216a9"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.475781 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerStarted","Data":"42297e2c5e4314f8ac19bdb872ed1cfccfa8006702130dd94931f10251920fbc"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.477897 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.501487 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.518292 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"9caa16c46bc30be6e071b0e834721a3aa7b66b87e46c812829739a0491423617"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.519319 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.519365 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.519375 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dw4rl\" (UniqueName: \"kubernetes.io/projected/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-kube-api-access-dw4rl\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.519393 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/e2d56c6e-b9ad-4de9-8fe6-06b00293050e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.519618 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:16.019607639 +0000 UTC m=+164.227929612 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.542365 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerStarted","Data":"a8988d59128eab2f53f7dd920de01a7b98a3e4e952f90431883ff756e50dadbe"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.542415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerStarted","Data":"b3c438c94578ed127de08ab71e5b40caf95c66fe2d7a2b37a5e91dfd80db62be"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.579535 4725 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.622697 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.622996 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 
podName:8f668bae-612b-4b75-9490-919e737c6a3b nodeName:}" failed. No retries permitted until 2026-01-20 11:07:16.122981898 +0000 UTC m=+164.331303861 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.646499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerStarted","Data":"1a440377416e2e3be97cb4385521f0b527fd44fc3d296005eb3a6215b7798a51"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.648997 4725 generic.go:334] "Generic (PLEG): container finished" podID="247dcae1-930b-476d-abbe-f33c3da0730b" containerID="319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72" exitCode=0 Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.649043 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerDied","Data":"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.653035 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.659345 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9" event={"ID":"e2d56c6e-b9ad-4de9-8fe6-06b00293050e","Type":"ContainerDied","Data":"65a351e547318d4029df04eb1e821ccf32f46b5e2d9c44ec151c7be7e639c1ca"} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.659392 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="65a351e547318d4029df04eb1e821ccf32f46b5e2d9c44ec151c7be7e639c1ca" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.724696 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:15 crc kubenswrapper[4725]: E0120 11:07:15.725049 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8 podName: nodeName:}" failed. No retries permitted until 2026-01-20 11:07:16.225032785 +0000 UTC m=+164.433354758 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "image-registry-697d97f7c8-w5jhq" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.758429 4725 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-20T11:07:15.579558828Z","Handler":null,"Name":""} Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.786067 4725 csi_plugin.go:100] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.786114 4725 csi_plugin.go:113] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.827883 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.828630 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"8f668bae-612b-4b75-9490-919e737c6a3b\" (UID: \"8f668bae-612b-4b75-9490-919e737c6a3b\") " Jan 20 11:07:15 crc kubenswrapper[4725]: W0120 11:07:15.874249 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7865a54a_be9b_4a0a_8c84_b45c8bfe40e6.slice/crio-c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60 WatchSource:0}: Error finding container c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60: Status 404 returned error can't find the container with id c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60 Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.889589 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8") pod "8f668bae-612b-4b75-9490-919e737c6a3b" (UID: "8f668bae-612b-4b75-9490-919e737c6a3b"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 11:07:15 crc kubenswrapper[4725]: I0120 11:07:15.995190 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.062098 4725 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.062161 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/1f4776af88835e41c12b831b4c9fed40233456d14189815a54dbe7f892fc1983/globalmount\"" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.144799 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-697d97f7c8-w5jhq\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.151516 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.270606 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:16 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:16 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:16 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.270883 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.325767 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.380036 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.830160 4725 generic.go:334] "Generic (PLEG): container finished" podID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerID="a8988d59128eab2f53f7dd920de01a7b98a3e4e952f90431883ff756e50dadbe" exitCode=0 Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.830395 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerDied","Data":"a8988d59128eab2f53f7dd920de01a7b98a3e4e952f90431883ff756e50dadbe"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.848163 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerStarted","Data":"fbfff8e8818beecfb8c02cfbcbeb21c81754f2aeda1e021b3b81559a276b8a66"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.852663 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerStarted","Data":"c8cf137c59938a71804fd93575de29dac65e3fbdae7d9616af8e1e0e425812c7"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.925882 4725 generic.go:334] "Generic (PLEG): container finished" podID="1ba77d4b-0178-4730-8869-389efdf58851" containerID="38beb6d6731fbc36ccb21ece2faf5cceb4d8191e98451bfd04d8127368937300" exitCode=0 Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.926161 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerDied","Data":"38beb6d6731fbc36ccb21ece2faf5cceb4d8191e98451bfd04d8127368937300"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.972481 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8f668bae-612b-4b75-9490-919e737c6a3b" path="/var/lib/kubelet/pods/8f668bae-612b-4b75-9490-919e737c6a3b/volumes" Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.973034 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerStarted","Data":"c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.976410 4725 generic.go:334] "Generic (PLEG): container finished" podID="39d02691-2128-45e8-841b-5bbf79e0a116" containerID="bef010ae40f12ebf94868b1a7f63b8c8ce98852cd1c4ccb364c0b676606ca709" exitCode=0 Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.976543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerDied","Data":"bef010ae40f12ebf94868b1a7f63b8c8ce98852cd1c4ccb364c0b676606ca709"} Jan 20 11:07:16 crc kubenswrapper[4725]: I0120 11:07:16.976578 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerStarted","Data":"947644fa4cdb3ece3385cefa57c8a4ab47c9b07453257db4d816fb94806bf10c"} Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.086729 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" 
event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"c9730c818f2fe24e35cb8693b04250657f12a4654e60ab7b891225f0df5cbb35"} Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.259957 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:17 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:17 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:17 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.260318 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.740671 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.936613 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:17 crc kubenswrapper[4725]: I0120 11:07:17.942331 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-76f77b778f-twkw7" Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.173056 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" event={"ID":"cec62c65-a846-4cc0-bb51-01d2d70c4c85","Type":"ContainerStarted","Data":"ed7560860908ee6c4f83f3490cbdd1843d5adf7ac8051897ed017552b83ca2ee"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.214611 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" event={"ID":"1bb3a268-d628-4c34-b9ca-38d43d82bf86","Type":"ContainerStarted","Data":"f469d9e066b529cf53b0a7c8792a55c1826f2aa074b17a33ebb83670eceeed8e"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.225055 4725 generic.go:334] "Generic (PLEG): container finished" podID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerID="79b3dc2509427f8e48ea65515f6bd240f048253490613646e6daeff65ff41302" exitCode=0 Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.225478 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerDied","Data":"79b3dc2509427f8e48ea65515f6bd240f048253490613646e6daeff65ff41302"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.283224 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:18 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:18 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:18 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.283291 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" 
probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.303322 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerID="06596abc1be5a61b774b86675bea7d758f393f271eafec99aee9e0618b84133b" exitCode=0 Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.303392 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerDied","Data":"06596abc1be5a61b774b86675bea7d758f393f271eafec99aee9e0618b84133b"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.423332 4725 generic.go:334] "Generic (PLEG): container finished" podID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerID="9f5ff65ac43718d6c6a2cb0ff08d34aa44b3c5b853c8111fc5672b5c544f3567" exitCode=0 Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.424221 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerDied","Data":"9f5ff65ac43718d6c6a2cb0ff08d34aa44b3c5b853c8111fc5672b5c544f3567"} Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.432959 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-9vt8w" podStartSLOduration=19.432941977 podStartE2EDuration="19.432941977s" podCreationTimestamp="2026-01-20 11:06:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:18.347500933 +0000 UTC m=+166.555822906" watchObservedRunningTime="2026-01-20 11:07:18.432941977 +0000 UTC m=+166.641263950" Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.694775 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:07:18 crc kubenswrapper[4725]: I0120 11:07:18.704774 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/a5d55efc-e85a-4a02-a4ce-7355df9fea66-metrics-certs\") pod \"network-metrics-daemon-5lfc4\" (UID: \"a5d55efc-e85a-4a02-a4ce-7355df9fea66\") " pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.089935 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-5lfc4" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.230070 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:19 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:19 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:19 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.230184 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.230184 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:19 crc kubenswrapper[4725]: E0120 11:07:19.230450 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" containerName="collect-profiles" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.230464 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" containerName="collect-profiles" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.232363 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" containerName="collect-profiles" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.232858 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.251530 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager"/"installer-sa-dockercfg-kjl2n" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.251743 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager"/"kube-root-ca.crt" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.282954 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.417184 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.417520 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.518419 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.518591 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:19 crc kubenswrapper[4725]: I0120 11:07:19.519188 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:19.941359 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" event={"ID":"cec62c65-a846-4cc0-bb51-01d2d70c4c85","Type":"ContainerStarted","Data":"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b"} Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:19.941438 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.025131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " 
pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.066361 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" podStartSLOduration=146.066336779 podStartE2EDuration="2m26.066336779s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:20.065317946 +0000 UTC m=+168.273639939" watchObservedRunningTime="2026-01-20 11:07:20.066336779 +0000 UTC m=+168.274658752" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.225612 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:20 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:20 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:20 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.225969 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.306770 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:20 crc kubenswrapper[4725]: I0120 11:07:20.571752 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-5lfc4"] Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.037815 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" event={"ID":"a5d55efc-e85a-4a02-a4ce-7355df9fea66","Type":"ContainerStarted","Data":"0043dacebf82e1e855679316749abf1572b578bb3df75e31802796bae6941f2f"} Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.278641 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:21 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:21 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:21 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.278971 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.331297 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager/revision-pruner-9-crc"] Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.538590 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.540096 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.543364 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.543652 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.590844 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.620609 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.620667 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.707216 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-x85nm" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.722370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.722411 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.722503 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.795860 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") pod \"revision-pruner-8-crc\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:21 crc kubenswrapper[4725]: I0120 11:07:21.862501 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:22 crc kubenswrapper[4725]: I0120 11:07:22.225795 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:22 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:22 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:22 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:22 crc kubenswrapper[4725]: I0120 11:07:22.225869 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:22 crc kubenswrapper[4725]: I0120 11:07:22.236232 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ec338f6-dfbe-4760-b504-c0ad09ff73e4","Type":"ContainerStarted","Data":"e5a4148505fb0e4a5e1b82e8ef6c225248aab25fa9c4ba3beafd03def4b81975"} Jan 20 11:07:22 crc kubenswrapper[4725]: I0120 11:07:22.832492 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-8-crc"] Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.057986 4725 patch_prober.go:28] interesting pod/console-f9d7485db-75nfb container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" start-of-body= Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.058072 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-console/console-f9d7485db-75nfb" podUID="b8859d17-62ea-47b3-ac63-537e69ec9f90" containerName="console" probeResult="failure" output="Get \"https://10.217.0.31:8443/health\": dial tcp 10.217.0.31:8443: connect: connection refused" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.230283 4725 patch_prober.go:28] interesting pod/router-default-5444994796-nxchh container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 20 11:07:23 crc kubenswrapper[4725]: [-]has-synced failed: reason withheld Jan 20 11:07:23 crc kubenswrapper[4725]: [+]process-running ok Jan 20 11:07:23 crc kubenswrapper[4725]: healthz check failed Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.230335 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-5444994796-nxchh" podUID="d19058e6-30ec-474e-bada-73b4981a9b65" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.259056 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.285255 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.509752 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure 
output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.509844 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.512198 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.512259 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.592320 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ba724-600e-4af4-ab50-ac02931703cd","Type":"ContainerStarted","Data":"1c73f4f8089a92a0b6b7a028dac6aeb69d5b46fdbc672c2e6ac12f358ca9bcec"} Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.648845 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ec338f6-dfbe-4760-b504-c0ad09ff73e4","Type":"ContainerStarted","Data":"27dd8d1e6821e290aee0dbac19d45303743aa4766fc6094ca7f43758325a4a79"} Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.667304 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" event={"ID":"a5d55efc-e85a-4a02-a4ce-7355df9fea66","Type":"ContainerStarted","Data":"4141521a59d9a97f045efdc71ae0fcd4cedc726430929a21ff2b638ea2bb5d4d"} Jan 20 11:07:23 crc kubenswrapper[4725]: I0120 11:07:23.669747 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/revision-pruner-9-crc" podStartSLOduration=4.6697322230000005 podStartE2EDuration="4.669732223s" podCreationTimestamp="2026-01-20 11:07:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:23.666874264 +0000 UTC m=+171.875196237" watchObservedRunningTime="2026-01-20 11:07:23.669732223 +0000 UTC m=+171.878054196" Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.279564 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.285701 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-5444994796-nxchh" Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.715719 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-5lfc4" event={"ID":"a5d55efc-e85a-4a02-a4ce-7355df9fea66","Type":"ContainerStarted","Data":"71d14d5b8a89fa533d45e7a2e7ce7faed4b28b512da90a24eb88e6876290d391"} Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.764914 4725 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ba724-600e-4af4-ab50-ac02931703cd","Type":"ContainerStarted","Data":"c17ac939ba1cf009322edad519220b6990322f13dc1944ac3985123b82ce45ca"} Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.783324 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ec338f6-dfbe-4760-b504-c0ad09ff73e4" containerID="27dd8d1e6821e290aee0dbac19d45303743aa4766fc6094ca7f43758325a4a79" exitCode=0 Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.787830 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ec338f6-dfbe-4760-b504-c0ad09ff73e4","Type":"ContainerDied","Data":"27dd8d1e6821e290aee0dbac19d45303743aa4766fc6094ca7f43758325a4a79"} Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.791343 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-8-crc" podStartSLOduration=3.791318997 podStartE2EDuration="3.791318997s" podCreationTimestamp="2026-01-20 11:07:21 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:24.788204719 +0000 UTC m=+172.996526692" watchObservedRunningTime="2026-01-20 11:07:24.791318997 +0000 UTC m=+172.999640970" Jan 20 11:07:24 crc kubenswrapper[4725]: I0120 11:07:24.791666 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-5lfc4" podStartSLOduration=150.791661708 podStartE2EDuration="2m30.791661708s" podCreationTimestamp="2026-01-20 11:04:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:07:24.73973104 +0000 UTC m=+172.948053033" watchObservedRunningTime="2026-01-20 11:07:24.791661708 +0000 UTC m=+172.999983681" Jan 20 11:07:25 crc kubenswrapper[4725]: I0120 11:07:25.866381 4725 generic.go:334] "Generic (PLEG): container finished" podID="1c3ba724-600e-4af4-ab50-ac02931703cd" containerID="c17ac939ba1cf009322edad519220b6990322f13dc1944ac3985123b82ce45ca" exitCode=0 Jan 20 11:07:25 crc kubenswrapper[4725]: I0120 11:07:25.868391 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ba724-600e-4af4-ab50-ac02931703cd","Type":"ContainerDied","Data":"c17ac939ba1cf009322edad519220b6990322f13dc1944ac3985123b82ce45ca"} Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.453164 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.632280 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") pod \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.632393 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3ec338f6-dfbe-4760-b504-c0ad09ff73e4" (UID: "3ec338f6-dfbe-4760-b504-c0ad09ff73e4"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.632819 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") pod \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\" (UID: \"3ec338f6-dfbe-4760-b504-c0ad09ff73e4\") " Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.633302 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.657516 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3ec338f6-dfbe-4760-b504-c0ad09ff73e4" (UID: "3ec338f6-dfbe-4760-b504-c0ad09ff73e4"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.727667 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.727732 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.736233 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3ec338f6-dfbe-4760-b504-c0ad09ff73e4-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.895649 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/revision-pruner-9-crc" event={"ID":"3ec338f6-dfbe-4760-b504-c0ad09ff73e4","Type":"ContainerDied","Data":"e5a4148505fb0e4a5e1b82e8ef6c225248aab25fa9c4ba3beafd03def4b81975"} Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.895708 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e5a4148505fb0e4a5e1b82e8ef6c225248aab25fa9c4ba3beafd03def4b81975" Jan 20 11:07:26 crc kubenswrapper[4725]: I0120 11:07:26.895668 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/revision-pruner-9-crc" Jan 20 11:07:27 crc kubenswrapper[4725]: I0120 11:07:27.911581 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-8-crc" event={"ID":"1c3ba724-600e-4af4-ab50-ac02931703cd","Type":"ContainerDied","Data":"1c73f4f8089a92a0b6b7a028dac6aeb69d5b46fdbc672c2e6ac12f358ca9bcec"} Jan 20 11:07:27 crc kubenswrapper[4725]: I0120 11:07:27.911871 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1c73f4f8089a92a0b6b7a028dac6aeb69d5b46fdbc672c2e6ac12f358ca9bcec" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.005921 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.102529 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") pod \"1c3ba724-600e-4af4-ab50-ac02931703cd\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.102610 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") pod \"1c3ba724-600e-4af4-ab50-ac02931703cd\" (UID: \"1c3ba724-600e-4af4-ab50-ac02931703cd\") " Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.103281 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "1c3ba724-600e-4af4-ab50-ac02931703cd" (UID: "1c3ba724-600e-4af4-ab50-ac02931703cd"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.160588 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "1c3ba724-600e-4af4-ab50-ac02931703cd" (UID: "1c3ba724-600e-4af4-ab50-ac02931703cd"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.204338 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/1c3ba724-600e-4af4-ab50-ac02931703cd-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.204368 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/1c3ba724-600e-4af4-ab50-ac02931703cd-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:07:28 crc kubenswrapper[4725]: I0120 11:07:28.919845 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-8-crc" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.137962 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.144416 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-f9d7485db-75nfb" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419008 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419064 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419707 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419739 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.419777 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.420590 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56"} pod="openshift-console/downloads-7954f5f757-2hmdd" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.420671 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" containerID="cri-o://c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56" gracePeriod=2 Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.421631 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:33 crc kubenswrapper[4725]: I0120 11:07:33.421652 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:34 crc kubenswrapper[4725]: I0120 
11:07:34.024277 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerID="c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56" exitCode=0 Jan 20 11:07:34 crc kubenswrapper[4725]: I0120 11:07:34.024488 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerDied","Data":"c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56"} Jan 20 11:07:36 crc kubenswrapper[4725]: I0120 11:07:36.385950 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:07:42 crc kubenswrapper[4725]: I0120 11:07:42.793268 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-xd92c" Jan 20 11:07:43 crc kubenswrapper[4725]: I0120 11:07:43.419057 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:43 crc kubenswrapper[4725]: I0120 11:07:43.419215 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:43 crc kubenswrapper[4725]: I0120 11:07:43.569358 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-789f6589d5-6hcj8" Jan 20 11:07:53 crc kubenswrapper[4725]: I0120 11:07:53.420378 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:07:53 crc kubenswrapper[4725]: I0120 11:07:53.420997 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.727333 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.727894 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.727946 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 
11:07:56.728539 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:07:56 crc kubenswrapper[4725]: I0120 11:07:56.728608 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665" gracePeriod=600 Jan 20 11:07:57 crc kubenswrapper[4725]: I0120 11:07:57.634940 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665" exitCode=0 Jan 20 11:07:57 crc kubenswrapper[4725]: I0120 11:07:57.635037 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665"} Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.516459 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.516717 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-mk8lh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-vs4qk_openshift-marketplace(98dafc65-0a7c-41fd-abc5-8e8fba03ffa9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context 
canceled" logger="UnhandledError" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.518316 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-vs4qk" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.812414 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-vs4qk" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.900283 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/certified-operator-index:v4.18" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.900607 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/certified-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-rkgp6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod certified-operators-6n4zh_openshift-marketplace(7ebdb343-11c1-4e64-9538-98ca4298b821): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:07:58 crc kubenswrapper[4725]: E0120 11:07:58.902244 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/certified-operators-6n4zh" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146270 4725 kubelet.go:2421] 
"SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 11:07:59 crc kubenswrapper[4725]: E0120 11:07:59.146599 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1c3ba724-600e-4af4-ab50-ac02931703cd" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146611 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1c3ba724-600e-4af4-ab50-ac02931703cd" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: E0120 11:07:59.146636 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ec338f6-dfbe-4760-b504-c0ad09ff73e4" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146643 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ec338f6-dfbe-4760-b504-c0ad09ff73e4" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146806 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ec338f6-dfbe-4760-b504-c0ad09ff73e4" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.146823 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="1c3ba724-600e-4af4-ab50-ac02931703cd" containerName="pruner" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.147301 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.150190 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.150639 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver"/"kube-root-ca.crt" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.150954 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver"/"installer-sa-dockercfg-5pr6n" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.176662 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.176786 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.280260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.280315 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 
20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.280417 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.314021 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") pod \"revision-pruner-9-crc\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:07:59 crc kubenswrapper[4725]: I0120 11:07:59.474632 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.139911 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.141916 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.163277 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.240222 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.240265 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.240283 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.348768 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.348836 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.348855 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" 
(UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.348976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.349063 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.368893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") pod \"installer-9-crc\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.419494 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.419624 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:03 crc kubenswrapper[4725]: I0120 11:08:03.474393 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:08:03 crc kubenswrapper[4725]: E0120 11:08:03.912279 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/certified-operator-index:v4.18\\\"\"" pod="openshift-marketplace/certified-operators-6n4zh" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" Jan 20 11:08:04 crc kubenswrapper[4725]: E0120 11:08:04.002361 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 11:08:04 crc kubenswrapper[4725]: E0120 11:08:04.002838 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-66ggl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-78bg4_openshift-marketplace(4f648359-ab53-49a7-8f1a-77281c2bd53c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:04 crc kubenswrapper[4725]: E0120 11:08:04.004001 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-78bg4" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" Jan 20 11:08:07 crc kubenswrapper[4725]: E0120 11:08:07.299204 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-78bg4" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" Jan 20 11:08:13 crc kubenswrapper[4725]: I0120 11:08:13.418586 4725 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:13 crc kubenswrapper[4725]: I0120 11:08:13.419410 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.665944 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.666193 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-z8wq8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-vbr29_openshift-marketplace(247dcae1-930b-476d-abbe-f33c3da0730b): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.667683 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-vbr29" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.843578 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/community-operator-index:v4.18" Jan 20 11:08:13 crc 
kubenswrapper[4725]: E0120 11:08:13.843806 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/community-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m8h6b,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod community-operators-8pplm_openshift-marketplace(1ba77d4b-0178-4730-8869-389efdf58851): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:13 crc kubenswrapper[4725]: E0120 11:08:13.845062 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/community-operators-8pplm" podUID="1ba77d4b-0178-4730-8869-389efdf58851" Jan 20 11:08:14 crc kubenswrapper[4725]: E0120 11:08:14.411153 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-operator-index:v4.18" Jan 20 11:08:14 crc kubenswrapper[4725]: E0120 11:08:14.411335 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-operator-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-k8ntp,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-operators-6nxjc_openshift-marketplace(7865a54a-be9b-4a0a-8c84-b45c8bfe40e6): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:14 crc kubenswrapper[4725]: E0120 11:08:14.412606 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-operators-6nxjc" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.891356 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-vbr29" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.891529 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/community-operator-index:v4.18\\\"\"" pod="openshift-marketplace/community-operators-8pplm" podUID="1ba77d4b-0178-4730-8869-389efdf58851" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.892064 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-operator-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-operators-6nxjc" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.976439 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.976627 4725 kuberuntime_manager.go:1274] "Unhandled Error" 
err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache --cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-d2n6d,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-lxmdj_openshift-marketplace(39d02691-2128-45e8-841b-5bbf79e0a116): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.978099 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-lxmdj" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.986367 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/redhat/redhat-marketplace-index:v4.18" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.986525 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="init container &Container{Name:extract-content,Image:registry.redhat.io/redhat/redhat-marketplace-index:v4.18,Command:[/utilities/copy-content],Args:[--catalog.from=/configs --catalog.to=/extracted-catalog/catalog --cache.from=/tmp/cache 
--cache.to=/extracted-catalog/cache],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:utilities,ReadOnly:false,MountPath:/utilities,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:catalog-content,ReadOnly:false,MountPath:/extracted-catalog,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-fqcqh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000170000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod redhat-marketplace-c2jtp_openshift-marketplace(10de7f77-2b14-4c56-b4db-ebb93422b89c): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:08:16 crc kubenswrapper[4725]: E0120 11:08:16.989253 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-marketplace/redhat-marketplace-c2jtp" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" Jan 20 11:08:17 crc kubenswrapper[4725]: I0120 11:08:17.401107 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-9-crc"] Jan 20 11:08:17 crc kubenswrapper[4725]: I0120 11:08:17.411359 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-9-crc"] Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.772384 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d51d3df-3326-410b-b913-a269f46bb674","Type":"ContainerStarted","Data":"85530cce234d8a705121a8934ff7069e86642c36409985a7688a7884b5e723ae"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.773792 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3bad494d-da48-47e2-bcba-3908cecfbb5a","Type":"ContainerStarted","Data":"f3aec21c53a64aee3c2463f463b5a0fee8ad405f9757e5a135714fa18e74494f"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.775729 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerStarted","Data":"224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.776961 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.777162 4725 patch_prober.go:28] interesting 
pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.777207 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:17.780090 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f"} Jan 20 11:08:18 crc kubenswrapper[4725]: E0120 11:08:17.784476 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-lxmdj" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" Jan 20 11:08:18 crc kubenswrapper[4725]: E0120 11:08:17.785096 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"extract-content\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.18\\\"\"" pod="openshift-marketplace/redhat-marketplace-c2jtp" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.794880 4725 generic.go:334] "Generic (PLEG): container finished" podID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerID="058803a271e18294b6a526aecf968520aa7cedead52dfdc4165a6133e9e375f6" exitCode=0 Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.795052 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerDied","Data":"058803a271e18294b6a526aecf968520aa7cedead52dfdc4165a6133e9e375f6"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.798982 4725 generic.go:334] "Generic (PLEG): container finished" podID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerID="e298ffa53486948221219263d81f91dd0aaf57b63b66a788f8e75324e688da37" exitCode=0 Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.799060 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerDied","Data":"e298ffa53486948221219263d81f91dd0aaf57b63b66a788f8e75324e688da37"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.802554 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d51d3df-3326-410b-b913-a269f46bb674","Type":"ContainerStarted","Data":"bbb9f892391ca5a176419486af0aa396ba22c982eecb19372fb1e366d08efcd1"} Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.808453 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3bad494d-da48-47e2-bcba-3908cecfbb5a","Type":"ContainerStarted","Data":"64c2f0c49873a789ba7136c0ebf69a0326342714a2ec4617a64b11082bb0b9da"} Jan 20 11:08:18 crc 
kubenswrapper[4725]: I0120 11:08:18.809689 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.809727 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.836798 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-9-crc" podStartSLOduration=19.836779333 podStartE2EDuration="19.836779333s" podCreationTimestamp="2026-01-20 11:07:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:08:18.836135943 +0000 UTC m=+227.044457926" watchObservedRunningTime="2026-01-20 11:08:18.836779333 +0000 UTC m=+227.045101306" Jan 20 11:08:18 crc kubenswrapper[4725]: I0120 11:08:18.858628 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-9-crc" podStartSLOduration=15.858609629 podStartE2EDuration="15.858609629s" podCreationTimestamp="2026-01-20 11:08:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:08:18.85575977 +0000 UTC m=+227.064081743" watchObservedRunningTime="2026-01-20 11:08:18.858609629 +0000 UTC m=+227.066931602" Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.859951 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerStarted","Data":"31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405"} Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.862381 4725 generic.go:334] "Generic (PLEG): container finished" podID="3bad494d-da48-47e2-bcba-3908cecfbb5a" containerID="64c2f0c49873a789ba7136c0ebf69a0326342714a2ec4617a64b11082bb0b9da" exitCode=0 Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.862878 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3bad494d-da48-47e2-bcba-3908cecfbb5a","Type":"ContainerDied","Data":"64c2f0c49873a789ba7136c0ebf69a0326342714a2ec4617a64b11082bb0b9da"} Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.863914 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.864062 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:19 crc kubenswrapper[4725]: I0120 11:08:19.886945 4725 pod_startup_latency_tracker.go:104] "Observed pod startup 
duration" pod="openshift-marketplace/certified-operators-vs4qk" podStartSLOduration=6.055612665 podStartE2EDuration="1m9.886922817s" podCreationTimestamp="2026-01-20 11:07:10 +0000 UTC" firstStartedPulling="2026-01-20 11:07:15.477412268 +0000 UTC m=+163.685734231" lastFinishedPulling="2026-01-20 11:08:19.3087224 +0000 UTC m=+227.517044383" observedRunningTime="2026-01-20 11:08:19.884801341 +0000 UTC m=+228.093123354" watchObservedRunningTime="2026-01-20 11:08:19.886922817 +0000 UTC m=+228.095244790" Jan 20 11:08:20 crc kubenswrapper[4725]: I0120 11:08:20.917666 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:20 crc kubenswrapper[4725]: I0120 11:08:20.918110 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:20 crc kubenswrapper[4725]: I0120 11:08:20.925213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerStarted","Data":"a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19"} Jan 20 11:08:20 crc kubenswrapper[4725]: I0120 11:08:20.943739 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6n4zh" podStartSLOduration=7.89163911 podStartE2EDuration="1m10.943709888s" podCreationTimestamp="2026-01-20 11:07:10 +0000 UTC" firstStartedPulling="2026-01-20 11:07:16.833477785 +0000 UTC m=+165.041799758" lastFinishedPulling="2026-01-20 11:08:19.885548563 +0000 UTC m=+228.093870536" observedRunningTime="2026-01-20 11:08:20.942084197 +0000 UTC m=+229.150406180" watchObservedRunningTime="2026-01-20 11:08:20.943709888 +0000 UTC m=+229.152031861" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.383330 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.561265 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") pod \"3bad494d-da48-47e2-bcba-3908cecfbb5a\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.561433 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") pod \"3bad494d-da48-47e2-bcba-3908cecfbb5a\" (UID: \"3bad494d-da48-47e2-bcba-3908cecfbb5a\") " Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.561477 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "3bad494d-da48-47e2-bcba-3908cecfbb5a" (UID: "3bad494d-da48-47e2-bcba-3908cecfbb5a"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.561874 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/3bad494d-da48-47e2-bcba-3908cecfbb5a-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.700385 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "3bad494d-da48-47e2-bcba-3908cecfbb5a" (UID: "3bad494d-da48-47e2-bcba-3908cecfbb5a"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:08:21 crc kubenswrapper[4725]: I0120 11:08:21.701504 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/3bad494d-da48-47e2-bcba-3908cecfbb5a-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:22 crc kubenswrapper[4725]: I0120 11:08:22.061789 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-9-crc" event={"ID":"3bad494d-da48-47e2-bcba-3908cecfbb5a","Type":"ContainerDied","Data":"f3aec21c53a64aee3c2463f463b5a0fee8ad405f9757e5a135714fa18e74494f"} Jan 20 11:08:22 crc kubenswrapper[4725]: I0120 11:08:22.062230 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f3aec21c53a64aee3c2463f463b5a0fee8ad405f9757e5a135714fa18e74494f" Jan 20 11:08:22 crc kubenswrapper[4725]: I0120 11:08:22.062340 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-9-crc" Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.275786 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-vs4qk" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" probeResult="failure" output=< Jan 20 11:08:23 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:08:23 crc kubenswrapper[4725]: > Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.418578 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.418659 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.418684 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:23 crc kubenswrapper[4725]: I0120 11:08:23.418746 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: 
connect: connection refused" Jan 20 11:08:24 crc kubenswrapper[4725]: I0120 11:08:24.072345 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerStarted","Data":"9ddffe79c18c43e0511499904feb4ce11970963d5b621ba51bd27f1e5c8b5059"} Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.746315 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.746920 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.885637 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.904717 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:30 crc kubenswrapper[4725]: I0120 11:08:30.947488 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:31 crc kubenswrapper[4725]: I0120 11:08:31.274905 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerID="9ddffe79c18c43e0511499904feb4ce11970963d5b621ba51bd27f1e5c8b5059" exitCode=0 Jan 20 11:08:31 crc kubenswrapper[4725]: I0120 11:08:31.275770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerDied","Data":"9ddffe79c18c43e0511499904feb4ce11970963d5b621ba51bd27f1e5c8b5059"} Jan 20 11:08:31 crc kubenswrapper[4725]: I0120 11:08:31.892537 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:08:31 crc kubenswrapper[4725]: I0120 11:08:31.967686 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z"] Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.852528 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.853871 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.857008 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.857061 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" 
output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.906253 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:08:33 crc kubenswrapper[4725]: I0120 11:08:33.906487 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vs4qk" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" containerID="cri-o://31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405" gracePeriod=2 Jan 20 11:08:34 crc kubenswrapper[4725]: I0120 11:08:34.906220 4725 generic.go:334] "Generic (PLEG): container finished" podID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerID="31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405" exitCode=0 Jan 20 11:08:34 crc kubenswrapper[4725]: I0120 11:08:34.906302 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerDied","Data":"31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405"} Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.829288 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.923423 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mk8lh\" (UniqueName: \"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") pod \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.929362 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh" (OuterVolumeSpecName: "kube-api-access-mk8lh") pod "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" (UID: "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9"). InnerVolumeSpecName "kube-api-access-mk8lh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.935158 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vs4qk" Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.935529 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vs4qk" event={"ID":"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9","Type":"ContainerDied","Data":"42297e2c5e4314f8ac19bdb872ed1cfccfa8006702130dd94931f10251920fbc"} Jan 20 11:08:37 crc kubenswrapper[4725]: I0120 11:08:37.935718 4725 scope.go:117] "RemoveContainer" containerID="31b12e5532ee13e8b75aff820013764c6b144d32beb1dc6c9a164e160d1c5405" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.025239 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") pod \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.025612 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") pod \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\" (UID: \"98dafc65-0a7c-41fd-abc5-8e8fba03ffa9\") " Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.025852 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mk8lh\" (UniqueName: \"kubernetes.io/projected/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-kube-api-access-mk8lh\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.026964 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities" (OuterVolumeSpecName: "utilities") pod "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" (UID: "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.096788 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" (UID: "98dafc65-0a7c-41fd-abc5-8e8fba03ffa9"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.126755 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.126836 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.265847 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.272199 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vs4qk"] Jan 20 11:08:38 crc kubenswrapper[4725]: I0120 11:08:38.940189 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" path="/var/lib/kubelet/pods/98dafc65-0a7c-41fd-abc5-8e8fba03ffa9/volumes" Jan 20 11:08:39 crc kubenswrapper[4725]: I0120 11:08:39.960715 4725 scope.go:117] "RemoveContainer" containerID="058803a271e18294b6a526aecf968520aa7cedead52dfdc4165a6133e9e375f6" Jan 20 11:08:41 crc kubenswrapper[4725]: I0120 11:08:41.806842 4725 scope.go:117] "RemoveContainer" containerID="892418dd3e77ceab40f34a8a0fd5716151217dc2c55480d979119a50b49216a9" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418697 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418693 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418788 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418823 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.418851 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.419455 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="download-server" containerStatusID={"Type":"cri-o","ID":"224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29"} pod="openshift-console/downloads-7954f5f757-2hmdd" containerMessage="Container download-server failed liveness probe, will be restarted" Jan 20 11:08:43 crc 
kubenswrapper[4725]: I0120 11:08:43.419469 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.419498 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" containerID="cri-o://224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29" gracePeriod=2 Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.419516 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.975324 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerID="224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29" exitCode=0 Jan 20 11:08:43 crc kubenswrapper[4725]: I0120 11:08:43.975369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerDied","Data":"224d9eef3e311d8f59659266cbc05e855c7275a59ee55b347942977918828c29"} Jan 20 11:08:47 crc kubenswrapper[4725]: I0120 11:08:47.315626 4725 scope.go:117] "RemoveContainer" containerID="c40d839a198aef3fd3dea37307bd509fe523f84265c35bd129742ca5ff0a0f56" Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.011608 4725 generic.go:334] "Generic (PLEG): container finished" podID="247dcae1-930b-476d-abbe-f33c3da0730b" containerID="6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b" exitCode=0 Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.011696 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerDied","Data":"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.014142 4725 generic.go:334] "Generic (PLEG): container finished" podID="39d02691-2128-45e8-841b-5bbf79e0a116" containerID="5d88e1156fdd2131fb13a542776647afc695e341abc2d0bb759d85d523d36656" exitCode=0 Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.014216 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerDied","Data":"5d88e1156fdd2131fb13a542776647afc695e341abc2d0bb759d85d523d36656"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.017624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-7954f5f757-2hmdd" event={"ID":"6c5d8a1b-5c54-4877-8739-a83ab530197d","Type":"ContainerStarted","Data":"1081f83e5b2bc14f68fc29ac53c72e97033bcc38b173413314e21a99e6b6dbfc"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.018534 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.018630 4725 patch_prober.go:28] 
interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.018664 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.020960 4725 generic.go:334] "Generic (PLEG): container finished" podID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerID="3aebd70372873b9fbd7b4e02c72fa5025a0936f55bfdb8b39fafb1a0022fe117" exitCode=0 Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.021035 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerDied","Data":"3aebd70372873b9fbd7b4e02c72fa5025a0936f55bfdb8b39fafb1a0022fe117"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.023465 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerStarted","Data":"4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.026316 4725 generic.go:334] "Generic (PLEG): container finished" podID="1ba77d4b-0178-4730-8869-389efdf58851" containerID="95b3efd0e36287cff3884a1d24955133183f96b36b4ed22b901a472384a7ccb9" exitCode=0 Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.026350 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerDied","Data":"95b3efd0e36287cff3884a1d24955133183f96b36b4ed22b901a472384a7ccb9"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.028601 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerStarted","Data":"6fc29a792c0b2bdbde59b088ffd262a4ab5cd2ba7cd161055d4ccd07f8587ee9"} Jan 20 11:08:49 crc kubenswrapper[4725]: I0120 11:08:49.103151 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-78bg4" podStartSLOduration=7.128327411 podStartE2EDuration="1m36.103124124s" podCreationTimestamp="2026-01-20 11:07:13 +0000 UTC" firstStartedPulling="2026-01-20 11:07:18.33313803 +0000 UTC m=+166.541460003" lastFinishedPulling="2026-01-20 11:08:47.307934743 +0000 UTC m=+255.516256716" observedRunningTime="2026-01-20 11:08:49.100645356 +0000 UTC m=+257.308967339" watchObservedRunningTime="2026-01-20 11:08:49.103124124 +0000 UTC m=+257.311446097" Jan 20 11:08:50 crc kubenswrapper[4725]: I0120 11:08:50.084595 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:50 crc kubenswrapper[4725]: I0120 11:08:50.084659 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" 
podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.478107 4725 generic.go:334] "Generic (PLEG): container finished" podID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerID="6fc29a792c0b2bdbde59b088ffd262a4ab5cd2ba7cd161055d4ccd07f8587ee9" exitCode=0 Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.478481 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerDied","Data":"6fc29a792c0b2bdbde59b088ffd262a4ab5cd2ba7cd161055d4ccd07f8587ee9"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.482292 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerStarted","Data":"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.485561 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerStarted","Data":"d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.487952 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerStarted","Data":"a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.492328 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerStarted","Data":"fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1"} Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.492957 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.493099 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.618695 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-c2jtp" podStartSLOduration=7.802667533 podStartE2EDuration="1m39.618673486s" podCreationTimestamp="2026-01-20 11:07:12 +0000 UTC" firstStartedPulling="2026-01-20 11:07:18.283402152 +0000 UTC m=+166.491724125" lastFinishedPulling="2026-01-20 11:08:50.099408115 +0000 UTC m=+258.307730078" observedRunningTime="2026-01-20 11:08:51.614199886 +0000 UTC m=+259.822521849" watchObservedRunningTime="2026-01-20 11:08:51.618673486 +0000 UTC m=+259.826995459" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.675636 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/redhat-marketplace-lxmdj" podStartSLOduration=6.872450086 podStartE2EDuration="1m39.675617756s" podCreationTimestamp="2026-01-20 11:07:12 +0000 UTC" firstStartedPulling="2026-01-20 11:07:17.01723682 +0000 UTC m=+165.225558793" lastFinishedPulling="2026-01-20 11:08:49.82040449 +0000 UTC m=+258.028726463" observedRunningTime="2026-01-20 11:08:51.672130316 +0000 UTC m=+259.880452309" watchObservedRunningTime="2026-01-20 11:08:51.675617756 +0000 UTC m=+259.883939729" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.717898 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-vbr29" podStartSLOduration=7.505726522 podStartE2EDuration="1m41.717875203s" podCreationTimestamp="2026-01-20 11:07:10 +0000 UTC" firstStartedPulling="2026-01-20 11:07:15.650444834 +0000 UTC m=+163.858766807" lastFinishedPulling="2026-01-20 11:08:49.862593515 +0000 UTC m=+258.070915488" observedRunningTime="2026-01-20 11:08:51.692569388 +0000 UTC m=+259.900891371" watchObservedRunningTime="2026-01-20 11:08:51.717875203 +0000 UTC m=+259.926197176" Jan 20 11:08:51 crc kubenswrapper[4725]: I0120 11:08:51.718156 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-8pplm" podStartSLOduration=9.837646032 podStartE2EDuration="1m42.718149332s" podCreationTimestamp="2026-01-20 11:07:09 +0000 UTC" firstStartedPulling="2026-01-20 11:07:16.931767114 +0000 UTC m=+165.140089077" lastFinishedPulling="2026-01-20 11:08:49.812270404 +0000 UTC m=+258.020592377" observedRunningTime="2026-01-20 11:08:51.715274781 +0000 UTC m=+259.923596754" watchObservedRunningTime="2026-01-20 11:08:51.718149332 +0000 UTC m=+259.926471315" Jan 20 11:08:52 crc kubenswrapper[4725]: I0120 11:08:52.992021 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:08:52 crc kubenswrapper[4725]: I0120 11:08:52.992771 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.484207 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.484527 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.484238 4725 patch_prober.go:28] interesting pod/downloads-7954f5f757-2hmdd container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" start-of-body= Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.484675 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-7954f5f757-2hmdd" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.15:8080/\": dial tcp 10.217.0.15:8080: connect: connection refused" Jan 20 11:08:53 
crc kubenswrapper[4725]: I0120 11:08:53.743831 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:08:53 crc kubenswrapper[4725]: I0120 11:08:53.743994 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.102203 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-c2jtp" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" probeResult="failure" output=< Jan 20 11:08:54 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:08:54 crc kubenswrapper[4725]: > Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.631672 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerStarted","Data":"7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806"} Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.653340 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.653397 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:08:54 crc kubenswrapper[4725]: I0120 11:08:54.872813 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6nxjc" podStartSLOduration=7.796789785 podStartE2EDuration="1m41.872798066s" podCreationTimestamp="2026-01-20 11:07:13 +0000 UTC" firstStartedPulling="2026-01-20 11:07:18.42894251 +0000 UTC m=+166.637264483" lastFinishedPulling="2026-01-20 11:08:52.504950791 +0000 UTC m=+260.713272764" observedRunningTime="2026-01-20 11:08:54.87070837 +0000 UTC m=+263.079030343" watchObservedRunningTime="2026-01-20 11:08:54.872798066 +0000 UTC m=+263.081120039" Jan 20 11:08:55 crc kubenswrapper[4725]: I0120 11:08:55.072982 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-marketplace-lxmdj" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" probeResult="failure" output=< Jan 20 11:08:55 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:08:55 crc kubenswrapper[4725]: > Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.059606 4725 kubelet.go:2421] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.059936 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="extract-utilities" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.059952 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="extract-utilities" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.059969 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="extract-content" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.059976 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="extract-content" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 
11:08:56.059988 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3bad494d-da48-47e2-bcba-3908cecfbb5a" containerName="pruner" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.059996 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3bad494d-da48-47e2-bcba-3908cecfbb5a" containerName="pruner" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.060016 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.060023 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.060196 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3bad494d-da48-47e2-bcba-3908cecfbb5a" containerName="pruner" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.060213 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="98dafc65-0a7c-41fd-abc5-8e8fba03ffa9" containerName="registry-server" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.060780 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062347 4725 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062619 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" containerID="cri-o://e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062840 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" containerID="cri-o://3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062899 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062936 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.062984 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" containerID="cri-o://9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578" gracePeriod=15 Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064256 4725 kubelet.go:2421] "SyncLoop ADD" source="file" 
pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064442 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064457 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="setup" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064498 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064507 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-insecure-readyz" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064525 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064533 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064544 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064552 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064560 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064568 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064580 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064587 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: E0120 11:08:56.064599 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064608 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064749 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-regeneration-controller" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064763 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064773 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" 
containerName="kube-apiserver-insecure-readyz" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064782 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-check-endpoints" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064791 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver-cert-syncer" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.064800 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f4b27818a5e8e43d0dc095d08835c792" containerName="kube-apiserver" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.116018 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-78bg4" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" probeResult="failure" output=< Jan 20 11:08:56 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:08:56 crc kubenswrapper[4725]: > Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.224637 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225001 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225065 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225098 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225126 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225145 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 
11:08:56.225162 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.225181 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.327567 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328298 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328548 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328677 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.328900 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 
11:08:56.329131 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.330512 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371521 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371631 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371657 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/71bb4a3aecc4ba5b26c4b7318770ce13-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"71bb4a3aecc4ba5b26c4b7318770ce13\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371678 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371704 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371733 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:56 crc kubenswrapper[4725]: I0120 11:08:56.371762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:08:57 crc kubenswrapper[4725]: I0120 11:08:57.043017 4725 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift" containerID="cri-o://6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34" gracePeriod=15 Jan 20 11:08:57 crc kubenswrapper[4725]: I0120 11:08:57.657916 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 11:08:57 crc kubenswrapper[4725]: I0120 11:08:57.659563 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:08:57 crc kubenswrapper[4725]: I0120 11:08:57.660342 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578" exitCode=2 Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.672244 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-check-endpoints/0.log" Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.675928 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.678223 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7" exitCode=0 Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.678269 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de" exitCode=0 Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.678285 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89" exitCode=0 Jan 20 11:08:58 crc kubenswrapper[4725]: I0120 11:08:58.678369 4725 scope.go:117] "RemoveContainer" containerID="809f28d933d0375e6263b3771bc18584a2770e6ad28cb94ee76d3443997bda1b" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.686035 4725 generic.go:334] "Generic (PLEG): container finished" podID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerID="6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34" exitCode=0 Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.686149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" event={"ID":"9a6106c0-75fa-4285-bc23-06ced58cf133","Type":"ContainerDied","Data":"6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34"} Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.688272 4725 generic.go:334] "Generic (PLEG): container finished" podID="9d51d3df-3326-410b-b913-a269f46bb674" containerID="bbb9f892391ca5a176419486af0aa396ba22c982eecb19372fb1e366d08efcd1" exitCode=0 Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.688354 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d51d3df-3326-410b-b913-a269f46bb674","Type":"ContainerDied","Data":"bbb9f892391ca5a176419486af0aa396ba22c982eecb19372fb1e366d08efcd1"} Jan 20 
11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.689019 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.694224 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.695404 4725 generic.go:334] "Generic (PLEG): container finished" podID="f4b27818a5e8e43d0dc095d08835c792" containerID="e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd" exitCode=0 Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.934563 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.939964 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.940564 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.940748 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.942846 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.943243 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.943470 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.943662 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975575 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975652 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975703 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975736 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975769 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975792 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975810 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for 
volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975835 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975853 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975884 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975909 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975935 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975968 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.975995 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") pod \"f4b27818a5e8e43d0dc095d08835c792\" (UID: \"f4b27818a5e8e43d0dc095d08835c792\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.976012 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.976032 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: 
\"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.976051 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") pod \"9a6106c0-75fa-4285-bc23-06ced58cf133\" (UID: \"9a6106c0-75fa-4285-bc23-06ced58cf133\") " Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.977279 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978294 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978312 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978333 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978419 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978444 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f4b27818a5e8e43d0dc095d08835c792" (UID: "f4b27818a5e8e43d0dc095d08835c792"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978468 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.978703 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.984719 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.985171 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.985675 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw" (OuterVolumeSpecName: "kube-api-access-8d2rw") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "kube-api-access-8d2rw". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.986647 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.987014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.987009 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:08:59 crc kubenswrapper[4725]: I0120 11:08:59.987542 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.018024 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.018120 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "9a6106c0-75fa-4285-bc23-06ced58cf133" (UID: "9a6106c0-75fa-4285-bc23-06ced58cf133"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077729 4725 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077777 4725 reconciler_common.go:293] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077794 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077809 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077824 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077836 4725 reconciler_common.go:293] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077852 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077869 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077885 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8d2rw\" (UniqueName: \"kubernetes.io/projected/9a6106c0-75fa-4285-bc23-06ced58cf133-kube-api-access-8d2rw\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077897 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077909 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077921 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 
20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077934 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077945 4725 reconciler_common.go:293] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077957 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077970 4725 reconciler_common.go:293] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/9a6106c0-75fa-4285-bc23-06ced58cf133-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.077984 4725 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f4b27818a5e8e43d0dc095d08835c792-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.703097 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" event={"ID":"9a6106c0-75fa-4285-bc23-06ced58cf133","Type":"ContainerDied","Data":"f39928c8d7256975b95a8abe066b49247f38d754512e9fe57502d4feea0d8501"} Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.703154 4725 scope.go:117] "RemoveContainer" containerID="6d2fdd10ad23f57b144fbbf33de8f10cee0a91d14a076d0c7c7fb512c2d47b34" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.703260 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.704742 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.705299 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.705512 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.707066 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_f4b27818a5e8e43d0dc095d08835c792/kube-apiserver-cert-syncer/0.log" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.709014 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.730981 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.731049 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.732649 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.734221 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.739026 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.739942 4725 scope.go:117] "RemoveContainer" containerID="3e0809e2904fa71f91f8d861caae21537b46a4f68de4f644aa6e7850e91d2ec7" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.740095 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.740714 4725 status_manager.go:851] "Failed to get 
status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.741278 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.741789 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.748322 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.764878 4725 scope.go:117] "RemoveContainer" containerID="01e609e75f5d8376581ab27980529db3f1895e2d0f6a80baa706d2bd6d0e87de" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.786445 4725 scope.go:117] "RemoveContainer" containerID="660cc9c54fe4d2d3b9c14efc32078323fa6d251f9c9b6e6fa12b2925ffbaeb89" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.794140 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.796354 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.796812 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.801370 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.801888 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 
38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.805991 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.811438 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.815296 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.815359 4725 scope.go:117] "RemoveContainer" containerID="9c7b0a9f0724e2b327f58108134bcfe3b0aa254b4f5c20eb0b64e85c880c9578" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.815766 4725 status_manager.go:851] "Failed to get status for pod" podUID="f4b27818a5e8e43d0dc095d08835c792" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.816112 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.816345 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.836107 4725 scope.go:117] "RemoveContainer" containerID="e7e0e8c3e3da559e76b8cee52e00d8698c7d9451999cadda35a41b3edbcea3fd" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.851284 4725 scope.go:117] "RemoveContainer" containerID="b0fcf47a4858e69ef63a001c16236e70b37cad669915d4bfa1a8375cc5c27527" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.938963 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f4b27818a5e8e43d0dc095d08835c792" path="/var/lib/kubelet/pods/f4b27818a5e8e43d0dc095d08835c792/volumes" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.977308 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.978007 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.978465 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.978814 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:00 crc kubenswrapper[4725]: I0120 11:09:00.979129 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089129 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") pod \"9d51d3df-3326-410b-b913-a269f46bb674\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089199 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") pod \"9d51d3df-3326-410b-b913-a269f46bb674\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089247 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") pod \"9d51d3df-3326-410b-b913-a269f46bb674\" (UID: \"9d51d3df-3326-410b-b913-a269f46bb674\") " Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089305 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "9d51d3df-3326-410b-b913-a269f46bb674" (UID: "9d51d3df-3326-410b-b913-a269f46bb674"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089380 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock" (OuterVolumeSpecName: "var-lock") pod "9d51d3df-3326-410b-b913-a269f46bb674" (UID: "9d51d3df-3326-410b-b913-a269f46bb674"). InnerVolumeSpecName "var-lock". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089691 4725 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.089715 4725 reconciler_common.go:293] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/9d51d3df-3326-410b-b913-a269f46bb674-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.094714 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "9d51d3df-3326-410b-b913-a269f46bb674" (UID: "9d51d3df-3326-410b-b913-a269f46bb674"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.190760 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/9d51d3df-3326-410b-b913-a269f46bb674-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:01 crc kubenswrapper[4725]: E0120 11:09:01.205731 4725 kubelet.go:1929] "Failed creating a mirror pod for" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.206403 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:09:01 crc kubenswrapper[4725]: W0120 11:09:01.229377 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf85e55b1a89d02b0cb034b1ea31ed45a.slice/crio-fd7cff061a986922e1b4f28eb5e269b198a893eb66424f0de574e27fc343138e WatchSource:0}: Error finding container fd7cff061a986922e1b4f28eb5e269b198a893eb66424f0de574e27fc343138e: Status 404 returned error can't find the container with id fd7cff061a986922e1b4f28eb5e269b198a893eb66424f0de574e27fc343138e Jan 20 11:09:01 crc kubenswrapper[4725]: E0120 11:09:01.232369 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c6bdad2b37894 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 11:09:01.231782036 +0000 UTC m=+269.440104009,LastTimestamp:2026-01-20 11:09:01.231782036 +0000 UTC m=+269.440104009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.720096 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-9-crc" event={"ID":"9d51d3df-3326-410b-b913-a269f46bb674","Type":"ContainerDied","Data":"85530cce234d8a705121a8934ff7069e86642c36409985a7688a7884b5e723ae"} Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.720454 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="85530cce234d8a705121a8934ff7069e86642c36409985a7688a7884b5e723ae" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.720135 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-9-crc" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.721989 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"fd7cff061a986922e1b4f28eb5e269b198a893eb66424f0de574e27fc343138e"} Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.737970 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.741377 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.741821 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.742229 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.767033 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.767783 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.768143 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.768320 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.768475 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.771295 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.771742 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.772003 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.772285 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:01 crc kubenswrapper[4725]: I0120 11:09:01.772664 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.087594 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.088072 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" 
Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.091444 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.091946 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.107224 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f85e55b1a89d02b0cb034b1ea31ed45a","Type":"ContainerStarted","Data":"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b"} Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.136048 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.136860 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.137311 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.137683 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.137978 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.138251 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.171563 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 
11:09:03.172128 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.172591 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.173148 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.173421 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.173706 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.430210 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-7954f5f757-2hmdd" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.430801 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.431246 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.431518 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.431839 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" 
err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.432060 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.432304 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.784694 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.785608 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.786036 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.786380 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.786682 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.786977 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.787275 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc 
kubenswrapper[4725]: I0120 11:09:03.787567 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.832003 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.832808 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.833411 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.833766 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.834062 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.834432 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.834686 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:03 crc kubenswrapper[4725]: I0120 11:09:03.834950 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.114220 4725 kubelet.go:1929] "Failed creating a mirror pod for" err="Post 
\"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.114631 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.115048 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.115578 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.115797 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.115989 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.116199 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.116386 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.433987 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.434932 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: 
connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.436194 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.436765 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.437740 4725 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.437811 4725 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.438411 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="200ms" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.511438 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.511494 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.551597 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.552152 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.552580 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.552855 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.553204 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 
38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.553487 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.553747 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.554109 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.554585 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: E0120 11:09:04.639524 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="400ms"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.698869 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.699580 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.699924 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.700281 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.700672 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.701191 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.701541 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.701848 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.702207 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.702459 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.735117 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-78bg4"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.735841 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.736360 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.736820 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.737141 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.737410 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.737623 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.737884 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.738193 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:04 crc kubenswrapper[4725]: I0120 11:09:04.738435 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: E0120 11:09:05.040508 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="800ms"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.164051 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6nxjc"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.164702 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.165396 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.165711 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.166011 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.166331 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.166667 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.167331 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.167925 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: I0120 11:09:05.168302 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:05 crc kubenswrapper[4725]: E0120 11:09:05.841534 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="1.6s"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.932135 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.932824 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.933142 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.933437 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.933855 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.934231 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.934908 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.935207 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.935529 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.935770 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.949837 4725 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.949879 4725 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80"
Jan 20 11:09:06 crc kubenswrapper[4725]: E0120 11:09:06.950361 4725 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:06 crc kubenswrapper[4725]: I0120 11:09:06.950875 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:06 crc kubenswrapper[4725]: W0120 11:09:06.972567 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod71bb4a3aecc4ba5b26c4b7318770ce13.slice/crio-dba1315d3256f5e83955584b627532772239c8907a2e0464ac710d96d5ea985c WatchSource:0}: Error finding container dba1315d3256f5e83955584b627532772239c8907a2e0464ac710d96d5ea985c: Status 404 returned error can't find the container with id dba1315d3256f5e83955584b627532772239c8907a2e0464ac710d96d5ea985c
Jan 20 11:09:07 crc kubenswrapper[4725]: I0120 11:09:07.194929 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"dba1315d3256f5e83955584b627532772239c8907a2e0464ac710d96d5ea985c"}
Jan 20 11:09:07 crc kubenswrapper[4725]: E0120 11:09:07.442340 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="3.2s"
Jan 20 11:09:08 crc kubenswrapper[4725]: E0120 11:09:08.353747 4725 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.194:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188c6bdad2b37894 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f85e55b1a89d02b0cb034b1ea31ed45a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:9f36dc276e27753fc478274c7f7814a4f8945c987117ee1ea3b8e6355e6d7462\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-20 11:09:01.231782036 +0000 UTC m=+269.440104009,LastTimestamp:2026-01-20 11:09:01.231782036 +0000 UTC m=+269.440104009,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.219971 4725 generic.go:334] "Generic (PLEG): container finished" podID="71bb4a3aecc4ba5b26c4b7318770ce13" containerID="5e0ee4a8520f2950257bde6114c647cf2018446a23f9ee85a6195ee80f1f56b5" exitCode=0
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.220111 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerDied","Data":"5e0ee4a8520f2950257bde6114c647cf2018446a23f9ee85a6195ee80f1f56b5"}
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.220331 4725 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.220485 4725 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.221120 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: E0120 11:09:10.221146 4725 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.221628 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.221848 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222033 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222237 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222490 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222755 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.222987 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: I0120 11:09:10.223268 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:10 crc kubenswrapper[4725]: E0120 11:09:10.643127 4725 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.194:6443: connect: connection refused" interval="6.4s"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.233144 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.233493 4725 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b" exitCode=1
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.233525 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b"}
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.234495 4725 scope.go:117] "RemoveContainer" containerID="bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.241644 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.242044 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.242492 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.242698 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.242897 4725 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243109 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243297 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243475 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243659 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:11 crc kubenswrapper[4725]: I0120 11:09:11.243838 4725 status_manager.go:851] "Failed to get status for pod" podUID="9d51d3df-3326-410b-b913-a269f46bb674" pod="openshift-kube-apiserver/installer-9-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-9-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.942983 4725 status_manager.go:851] "Failed to get status for pod" podUID="6c5d8a1b-5c54-4877-8739-a83ab530197d" pod="openshift-console/downloads-7954f5f757-2hmdd" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-console/pods/downloads-7954f5f757-2hmdd\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.943730 4725 status_manager.go:851] "Failed to get status for pod" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" pod="openshift-marketplace/redhat-marketplace-lxmdj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-lxmdj\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.944017 4725 status_manager.go:851] "Failed to get status for pod" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" pod="openshift-marketplace/community-operators-vbr29" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-vbr29\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.944436 4725 status_manager.go:851] "Failed to get status for pod" podUID="71bb4a3aecc4ba5b26c4b7318770ce13" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.944789 4725 status_manager.go:851] "Failed to get status for pod" podUID="f614b9022728cf315e60c057852e563e" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-controller-manager/pods/kube-controller-manager-crc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.945202 4725 status_manager.go:851] "Failed to get status for pod" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" pod="openshift-marketplace/redhat-operators-6nxjc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-6nxjc\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.945946 4725 status_manager.go:851] "Failed to get status for pod" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" pod="openshift-marketplace/redhat-operators-78bg4" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-operators-78bg4\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.946245 4725 status_manager.go:851] "Failed to get status for pod" podUID="1ba77d4b-0178-4730-8869-389efdf58851" pod="openshift-marketplace/community-operators-8pplm" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/community-operators-8pplm\": dial tcp 38.102.83.194:6443: connect: connection refused"
Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.946471 4725 status_manager.go:851] "Failed to get status for pod" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" pod="openshift-marketplace/redhat-marketplace-c2jtp" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-marketplace/pods/redhat-marketplace-c2jtp\": dial tcp 38.102.83.194:6443: connect: connection refused"
refused" Jan 20 11:09:12 crc kubenswrapper[4725]: I0120 11:09:12.947834 4725 status_manager.go:851] "Failed to get status for pod" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" pod="openshift-authentication/oauth-openshift-558db77b4-lhx4z" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-558db77b4-lhx4z\": dial tcp 38.102.83.194:6443: connect: connection refused" Jan 20 11:09:13 crc kubenswrapper[4725]: I0120 11:09:13.268461 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 11:09:13 crc kubenswrapper[4725]: I0120 11:09:13.268635 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1"} Jan 20 11:09:13 crc kubenswrapper[4725]: I0120 11:09:13.272678 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"63712aa375616fbb699d9ac705043ae2bc23a9d78e9375d0563fd696b1c43981"} Jan 20 11:09:16 crc kubenswrapper[4725]: I0120 11:09:14.290834 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"6cfb902102eae93f880d5ef7a90008815ea13a18c8ff67faea8ac54f1d76ad94"} Jan 20 11:09:16 crc kubenswrapper[4725]: I0120 11:09:14.925420 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:09:17 crc kubenswrapper[4725]: I0120 11:09:17.660411 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"5b770a508b53ff718313002ad309dae0bc6d52414cdf6eb7477d3fe7aafffb1f"} Jan 20 11:09:18 crc kubenswrapper[4725]: I0120 11:09:18.679741 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"3e3a584572c50ff3363670c823b96eafff257bc8507772487be9d9e56f398344"} Jan 20 11:09:18 crc kubenswrapper[4725]: I0120 11:09:18.681004 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"71bb4a3aecc4ba5b26c4b7318770ce13","Type":"ContainerStarted","Data":"83a80a7126789b925a69fd547e0f7c325040d0f767b1efb3ba0ceec4cc88a515"} Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.686623 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.686689 4725 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80" Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.687856 4725 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80" Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.696380 4725 kubelet.go:1914] "Deleted mirror pod because it is outdated" 
pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.794053 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.794398 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 20 11:09:19 crc kubenswrapper[4725]: I0120 11:09:19.794482 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 20 11:09:20 crc kubenswrapper[4725]: I0120 11:09:20.692414 4725 kubelet.go:1909] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80" Jan 20 11:09:20 crc kubenswrapper[4725]: I0120 11:09:20.693209 4725 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="bb0f33a8-410b-4912-82e7-7ef77344fd80" Jan 20 11:09:21 crc kubenswrapper[4725]: I0120 11:09:21.053901 4725 status_manager.go:861] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="71bb4a3aecc4ba5b26c4b7318770ce13" podUID="27c0694a-974e-4403-b573-13de25d37a48" Jan 20 11:09:29 crc kubenswrapper[4725]: I0120 11:09:29.794599 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body= Jan 20 11:09:29 crc kubenswrapper[4725]: I0120 11:09:29.796475 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.197759 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"default-dockercfg-2q5b6" Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.243350 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"default-cni-sysctl-allowlist" Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.366653 4725 reflector.go:368] Caches populated for *v1.RuntimeClass from k8s.io/client-go/informers/factory.go:160 Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.530210 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"kube-root-ca.crt" Jan 20 11:09:32 crc kubenswrapper[4725]: I0120 11:09:32.769512 4725 cert_rotation.go:91] certificate rotation detected, shutting down client connections to start using new credentials Jan 20 11:09:33 crc kubenswrapper[4725]: I0120 11:09:33.044290 4725 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-kube-apiserver-operator"/"kube-root-ca.crt" Jan 20 11:09:33 crc kubenswrapper[4725]: I0120 11:09:33.221988 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"marketplace-trusted-ca" Jan 20 11:09:33 crc kubenswrapper[4725]: I0120 11:09:33.540921 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"openshift-service-ca.crt" Jan 20 11:09:33 crc kubenswrapper[4725]: I0120 11:09:33.946809 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-operator"/"metrics-tls" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.103808 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.103833 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"cluster-version-operator-serving-cert" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.105205 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"openshift-service-ca.crt" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.154069 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-dockercfg-r9srn" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.156016 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-dockercfg-xtcjv" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.278789 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"kube-root-ca.crt" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.307962 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"console-operator-dockercfg-4xjcr" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.314373 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.514278 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"machine-api-operator-images" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.554696 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"trusted-ca-bundle" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.707422 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-client" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.779099 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"encryption-config-1" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.789378 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"config" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.871477 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"signing-key" Jan 20 11:09:34 crc kubenswrapper[4725]: I0120 11:09:34.892639 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.128074 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-ca-bundle" Jan 20 11:09:35 crc 
kubenswrapper[4725]: I0120 11:09:35.169897 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-config" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.264262 4725 reflector.go:368] Caches populated for *v1.Service from k8s.io/client-go/informers/factory.go:160 Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.280410 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.380178 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"config-operator-serving-cert" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.482701 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"openshift-apiserver-sa-dockercfg-djjff" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.484802 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-root-ca.crt" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.526135 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"kube-root-ca.crt" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.531548 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"openshift-service-ca.crt" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.599453 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-control-plane-metrics-cert" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.633737 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"openshift-service-ca.crt" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.856540 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serviceaccount-dockercfg-rq7zk" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.884204 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver-operator"/"openshift-apiserver-operator-serving-cert" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.919847 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"kube-root-ca.crt" Jan 20 11:09:35 crc kubenswrapper[4725]: I0120 11:09:35.921124 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-admission-controller-secret" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.115328 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"metrics-tls" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.122907 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.145036 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-service-ca.crt" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.209242 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"kube-root-ca.crt" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.232251 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-cluster-samples-operator"/"cluster-samples-operator-dockercfg-xpp9w" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.234930 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-stats-default" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.281253 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-version"/"default-dockercfg-gxtc4" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.310316 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"kube-rbac-proxy" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.311830 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-serving-cert" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.348662 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.404626 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"ingress-operator-dockercfg-7lnqk" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.531904 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.533326 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-config" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.675202 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-dockercfg-vw8fw" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.722737 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"default-dockercfg-chnjx" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.814533 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"hostpath-provisioner"/"openshift-service-ca.crt" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.821828 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"serving-cert" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.893453 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"openshift-service-ca.crt" Jan 20 11:09:36 crc kubenswrapper[4725]: I0120 11:09:36.933796 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ancillary-tools-dockercfg-vnmsz" Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.036758 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"openshift-service-ca.crt" Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.129709 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-node-identity"/"network-node-identity-cert" Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.137222 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-service-ca-bundle" Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.177405 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"openshift-service-ca.crt" Jan 20 11:09:37 crc 
kubenswrapper[4725]: I0120 11:09:37.219270 4725 reflector.go:368] Caches populated for *v1.CSIDriver from k8s.io/client-go/informers/factory.go:160 Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.627955 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-dockercfg-qt55r" Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.829617 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"openshift-service-ca.crt" Jan 20 11:09:37 crc kubenswrapper[4725]: I0120 11:09:37.904451 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"signing-cabundle" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.048922 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-dockercfg-f62pw" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.151959 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"multus-daemon-config" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.195148 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-dockercfg-zdk86" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.195380 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-tls" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.195630 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca"/"service-ca-dockercfg-pn86c" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.423572 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"openshift-kube-scheduler-operator-config" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.508640 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"etcd-client" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.509349 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"kube-root-ca.crt" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.509676 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"service-ca-bundle" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.608720 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.651437 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"default-dockercfg-2llfx" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.655248 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-dockercfg-jwfmh" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.714707 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"serving-cert" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.748806 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-config" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.755141 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-tls" Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.778749 4725 
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.778749 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"kube-root-ca.crt"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.807516 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"pprof-cert"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.840013 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-rbac-proxy"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.856716 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"env-overrides"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.872696 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-service-ca-operator"/"serving-cert"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.883223 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-tls"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.957114 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator"/"kube-storage-version-migrator-sa-dockercfg-5xfcg"
Jan 20 11:09:38 crc kubenswrapper[4725]: I0120 11:09:38.961829 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"openshift-service-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.061593 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"kube-root-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.150063 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"serving-cert"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.198701 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-console"/"networking-console-plugin"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.257924 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"packageserver-service-cert"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.287269 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"multus-ac-dockercfg-9lkdf"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.291450 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-storage-version-migrator-operator"/"kube-storage-version-migrator-operator-dockercfg-2bh8d"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.322359 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.342534 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"env-overrides"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.503964 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"openshift-service-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.599344 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager-operator"/"openshift-controller-manager-operator-serving-cert"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.621385 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"node-bootstrapper-token"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.743647 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"kube-root-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.766395 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-dockercfg-gkqpw"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.780346 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.795130 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.795717 4725 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/kube-controller-manager namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused" start-of-body=
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.807479 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" probeResult="failure" output="Get \"https://192.168.126.11:10257/healthz\": dial tcp 192.168.126.11:10257: connect: connection refused"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.797431 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"machine-config-operator-images"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.807981 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.809366 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="kube-controller-manager" containerStatusID={"Type":"cri-o","ID":"e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1"} pod="openshift-kube-controller-manager/kube-controller-manager-crc" containerMessage="Container kube-controller-manager failed startup probe, will be restarted"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.809588 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="f614b9022728cf315e60c057852e563e" containerName="kube-controller-manager" containerID="cri-o://e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1" gracePeriod=30
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.817723 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"kube-root-ca.crt"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.857729 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"etcd-serving-ca"
Jan 20 11:09:39 crc kubenswrapper[4725]: I0120 11:09:39.968814 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-certs-default"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.004299 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-samples-operator"/"samples-operator-tls"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.120252 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-node-dockercfg-pwtwl"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.173382 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"kube-root-ca.crt"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.220768 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"service-ca-operator-config"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.306118 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"kube-root-ca.crt"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.371984 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"catalog-operator-serving-cert"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.391545 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"node-resolver-dockercfg-kz9s7"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.503714 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-operator"/"trusted-ca"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.503987 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"service-ca-bundle"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.503906 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"ovnkube-identity-cm"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.602178 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication-operator"/"authentication-operator-dockercfg-mz9bj"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.605384 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-dockercfg-x57mr"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.608387 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-node-metrics-cert"
Jan 20 11:09:40 crc kubenswrapper[4725]: I0120 11:09:40.612343 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"trusted-ca-bundle"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.010490 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-sa-dockercfg-d427c"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.014545 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-etcd-operator"/"etcd-operator-serving-cert"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.014557 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"oauth-apiserver-sa-dockercfg-6r2bq"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.014736 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator"/"openshift-service-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.027162 4725 reflector.go:368] Caches populated for *v1.Pod from pkg/kubelet/config/apiserver.go:66
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032054 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-authentication/oauth-openshift-558db77b4-lhx4z","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032132 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-cxhjt","openshift-kube-apiserver/kube-apiserver-crc"]
Jan 20 11:09:41 crc kubenswrapper[4725]: E0120 11:09:41.032425 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032450 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift"
Jan 20 11:09:41 crc kubenswrapper[4725]: E0120 11:09:41.032487 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9d51d3df-3326-410b-b913-a269f46bb674" containerName="installer"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032502 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9d51d3df-3326-410b-b913-a269f46bb674" containerName="installer"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032622 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" containerName="oauth-openshift"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.032643 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9d51d3df-3326-410b-b913-a269f46bb674" containerName="installer"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.033170 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.035888 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-provider-selection"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.038147 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"kube-root-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.039161 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-service-ca"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.042207 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.042729 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-session"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.042913 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"openshift-service-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043162 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"audit"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043194 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-error"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043794 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"package-server-manager-serving-cert"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043872 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"oauth-openshift-dockercfg-znhcc"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.043874 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-cliconfig"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.044245 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-serving-cert"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.044601 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-idp-0-file-data"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.044774 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-router-certs"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.054890 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication"/"v4-0-config-system-trusted-ca-bundle"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.057542 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-user-template-login"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.066273 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-authentication"/"v4-0-config-system-ocp-branding-template"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.070208 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=22.070190026 podStartE2EDuration="22.070190026s" podCreationTimestamp="2026-01-20 11:09:19 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:09:41.067007726 +0000 UTC m=+309.275329709" watchObservedRunningTime="2026-01-20 11:09:41.070190026 +0000 UTC m=+309.278511999"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.096817 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"kube-root-ca.crt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.107863 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.107986 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108052 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108106 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108152 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108222 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108283 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108319 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
\"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108416 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108456 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-policies\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.108522 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-dir\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.152413 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"config" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209222 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-policies\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209352 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-dir\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209471 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-dir\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209485 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209628 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: 
\"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209776 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209828 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.209913 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210128 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210214 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210297 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210370 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" 
(UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210449 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wc9z5\" (UniqueName: \"kubernetes.io/projected/74629c1f-0986-4d9f-bdd4-3c0672715065-kube-api-access-wc9z5\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.210548 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.211649 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-cliconfig\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.211950 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-audit-policies\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.212536 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-service-ca\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.215343 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.215707 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.216394 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-serving-cert\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.216477 4725 
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.216477 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.216920 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-router-certs\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.217295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-error\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.219183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-login\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.219741 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-session\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.224327 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.227854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/74629c1f-0986-4d9f-bdd4-3c0672715065-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.228150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wc9z5\" (UniqueName: \"kubernetes.io/projected/74629c1f-0986-4d9f-bdd4-3c0672715065-kube-api-access-wc9z5\") pod \"oauth-openshift-575cc5b957-cxhjt\" (UID: \"74629c1f-0986-4d9f-bdd4-3c0672715065\") " pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt"
object-"openshift-machine-config-operator"/"machine-config-server-dockercfg-qx5rd" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.292166 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-oauth-config" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.318876 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"cni-copy-resources" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.351000 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.383758 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"kube-root-ca.crt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.411134 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-config-operator"/"openshift-service-ca.crt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.422821 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"audit-1" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.480823 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"machine-approver-config" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.688404 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"console-operator-config" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.696722 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager-operator"/"kube-root-ca.crt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.768582 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-node-identity"/"openshift-service-ca.crt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.800308 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-575cc5b957-cxhjt"] Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.847049 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"openshift-service-ca.crt" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.951294 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.951345 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.957045 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 20 11:09:41 crc kubenswrapper[4725]: I0120 11:09:41.993056 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"kube-root-ca.crt" Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.002904 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns"/"dns-default" Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.022626 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" event={"ID":"74629c1f-0986-4d9f-bdd4-3c0672715065","Type":"ContainerStarted","Data":"03145731ffb8eb9c63ff5569a81f25a7f2b68611beacd61f4ce3f7fc363299cf"} Jan 20 11:09:42 crc 
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.027708 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.082825 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-config-operator"/"openshift-config-operator-dockercfg-7pc5z"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.315953 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.316300 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.317495 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-version"/"kube-root-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.328288 4725 reflector.go:368] Caches populated for *v1.Secret from object-"hostpath-provisioner"/"csi-hostpath-provisioner-sa-dockercfg-qd74k"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.334302 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"service-ca"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.404980 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-storage-version-migrator-operator"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.423896 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"openshift-service-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.440872 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.441322 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-canary"/"canary-serving-cert"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.493254 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"kube-root-ca.crt"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.541236 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"iptables-alerter-script"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.569791 4725 kubelet.go:2431] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"]
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.570076 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" containerID="cri-o://6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b" gracePeriod=5
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.723510 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"etcd-client"
Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.835963 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"trusted-ca"
object-"openshift-kube-scheduler-operator"/"kube-scheduler-operator-serving-cert" Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.846554 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-oauth-apiserver"/"encryption-config-1" Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.859119 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca"/"openshift-service-ca.crt" Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.942446 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9a6106c0-75fa-4285-bc23-06ced58cf133" path="/var/lib/kubelet/pods/9a6106c0-75fa-4285-bc23-06ced58cf133/volumes" Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.956416 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"kube-root-ca.crt" Jan 20 11:09:42 crc kubenswrapper[4725]: I0120 11:09:42.967528 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"openshift-service-ca.crt" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.003526 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"kube-root-ca.crt" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.028992 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575cc5b957-cxhjt_74629c1f-0986-4d9f-bdd4-3c0672715065/oauth-openshift/0.log" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.029042 4725 generic.go:334] "Generic (PLEG): container finished" podID="74629c1f-0986-4d9f-bdd4-3c0672715065" containerID="71d543bb382de7054da3bd8531a4cccaf979889db9ef36e5eb2c9452a7637aec" exitCode=255 Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.029128 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" event={"ID":"74629c1f-0986-4d9f-bdd4-3c0672715065","Type":"ContainerDied","Data":"71d543bb382de7054da3bd8531a4cccaf979889db9ef36e5eb2c9452a7637aec"} Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.029928 4725 scope.go:117] "RemoveContainer" containerID="71d543bb382de7054da3bd8531a4cccaf979889db9ef36e5eb2c9452a7637aec" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.123469 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-controller-dockercfg-c2lfx" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.216318 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"cluster-image-registry-operator-dockercfg-m4qtx" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.264944 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.310630 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-service-ca-operator"/"openshift-service-ca.crt" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.328112 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-diagnostics"/"openshift-service-ca.crt" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.550796 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-network-console"/"networking-console-plugin-cert" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.582449 4725 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-ingress-operator"/"kube-root-ca.crt" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.583646 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-daemon-dockercfg-r5tcq" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.678249 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"authentication-operator-config" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.781699 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console-operator"/"serving-cert" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.829724 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-samples-operator"/"openshift-service-ca.crt" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.868231 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"trusted-ca" Jan 20 11:09:43 crc kubenswrapper[4725]: I0120 11:09:43.954227 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"trusted-ca-bundle" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.035550 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-authentication_oauth-openshift-575cc5b957-cxhjt_74629c1f-0986-4d9f-bdd4-3c0672715065/oauth-openshift/0.log" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.036993 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" event={"ID":"74629c1f-0986-4d9f-bdd4-3c0672715065","Type":"ContainerStarted","Data":"519957c6b156057462815387b3f634d6978553198e161b60042b4c24c13cc669"} Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.037320 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.042783 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.046669 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"olm-operator-serving-cert" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.069744 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-575cc5b957-cxhjt" podStartSLOduration=73.069725886 podStartE2EDuration="1m13.069725886s" podCreationTimestamp="2026-01-20 11:08:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:09:44.066422532 +0000 UTC m=+312.274744525" watchObservedRunningTime="2026-01-20 11:09:44.069725886 +0000 UTC m=+312.278047859" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.113132 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"console-config" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.130966 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"kube-root-ca.crt" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.236609 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-console"/"console-serving-cert" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.298759 4725 
reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-scheduler-operator"/"kube-root-ca.crt" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.317681 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns-operator"/"dns-operator-dockercfg-9mqw5" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.434922 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-dns"/"dns-default-metrics-tls" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.449202 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.541077 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-cluster-machine-approver"/"openshift-service-ca.crt" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.575895 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"control-plane-machine-set-operator-dockercfg-k9rxt" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.768831 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress-operator"/"metrics-tls" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.795164 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ovn-kubernetes"/"ovnkube-script-lib" Jan 20 11:09:44 crc kubenswrapper[4725]: I0120 11:09:44.825913 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-serving-cert" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.052649 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver-operator"/"kube-root-ca.crt" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.172258 4725 reflector.go:368] Caches populated for *v1.Node from k8s.io/client-go/informers/factory.go:160 Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.214383 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"installation-pull-secrets" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.265519 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-operator-tls" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.453532 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"kube-root-ca.crt" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.503180 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-server-tls" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.519394 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-oauth-apiserver"/"audit-1" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.546806 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-multus"/"metrics-daemon-secret" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.607502 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-config-operator"/"kube-root-ca.crt" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.645110 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-root-ca.crt" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.763817 4725 reflector.go:368] Caches populated for *v1.ConfigMap from 
object-"openshift-authentication-operator"/"kube-root-ca.crt" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.770846 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"proxy-tls" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.819362 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-metrics" Jan 20 11:09:45 crc kubenswrapper[4725]: I0120 11:09:45.937453 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"etcd-serving-ca" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.025936 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"image-registry-certificates" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.274722 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress-canary"/"kube-root-ca.crt" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.361799 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-dns-operator"/"kube-root-ca.crt" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.413184 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"node-ca-dockercfg-4777p" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.582951 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console"/"oauth-serving-cert" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.644047 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-etcd-operator"/"etcd-operator-config" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.777732 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-apiserver"/"serving-cert" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.820763 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mco-proxy-tls" Jan 20 11:09:46 crc kubenswrapper[4725]: I0120 11:09:46.968163 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-ingress"/"openshift-service-ca.crt" Jan 20 11:09:48 crc kubenswrapper[4725]: I0120 11:09:48.983105 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 20 11:09:48 crc kubenswrapper[4725]: I0120 11:09:48.983649 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.063380 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f85e55b1a89d02b0cb034b1ea31ed45a/startup-monitor/0.log" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.063469 4725 generic.go:334] "Generic (PLEG): container finished" podID="f85e55b1a89d02b0cb034b1ea31ed45a" containerID="6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b" exitCode=137 Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.063545 4725 scope.go:117] "RemoveContainer" containerID="6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.063797 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107023 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107131 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107176 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107224 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107242 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") pod \"f85e55b1a89d02b0cb034b1ea31ed45a\" (UID: \"f85e55b1a89d02b0cb034b1ea31ed45a\") " Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107512 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107547 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log" (OuterVolumeSpecName: "var-log") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107704 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock" (OuterVolumeSpecName: "var-lock") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.107714 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests" (OuterVolumeSpecName: "manifests") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "manifests". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.110173 4725 scope.go:117] "RemoveContainer" containerID="6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b" Jan 20 11:09:49 crc kubenswrapper[4725]: E0120 11:09:49.110719 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b\": container with ID starting with 6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b not found: ID does not exist" containerID="6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.110841 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b"} err="failed to get container status \"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b\": rpc error: code = NotFound desc = could not find container \"6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b\": container with ID starting with 6d710284dc38193c8da82c80d14e0b0bfa14fc6378f2a4e5802649c4a1e5052b not found: ID does not exist" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.116368 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f85e55b1a89d02b0cb034b1ea31ed45a" (UID: "f85e55b1a89d02b0cb034b1ea31ed45a"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209013 4725 reconciler_common.go:293] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209061 4725 reconciler_common.go:293] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-lock\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209073 4725 reconciler_common.go:293] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-manifests\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209101 4725 reconciler_common.go:293] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:49 crc kubenswrapper[4725]: I0120 11:09:49.209111 4725 reconciler_common.go:293] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f85e55b1a89d02b0cb034b1ea31ed45a-var-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:09:50 crc kubenswrapper[4725]: I0120 11:09:50.938413 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" path="/var/lib/kubelet/pods/f85e55b1a89d02b0cb034b1ea31ed45a/volumes" Jan 20 11:09:56 crc kubenswrapper[4725]: I0120 11:09:56.704653 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-machine-api"/"kube-rbac-proxy" Jan 20 11:09:57 crc kubenswrapper[4725]: I0120 11:09:57.125198 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"openshift-service-ca-operator"/"service-ca-operator-dockercfg-rg9jl" Jan 20 11:09:57 crc kubenswrapper[4725]: I0120 11:09:57.197329 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-multus"/"openshift-service-ca.crt" Jan 20 11:09:59 crc kubenswrapper[4725]: I0120 11:09:59.774530 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"openshift-service-ca.crt" Jan 20 11:10:00 crc kubenswrapper[4725]: I0120 11:10:00.271970 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"mcc-proxy-tls" Jan 20 11:10:00 crc kubenswrapper[4725]: I0120 11:10:00.371991 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"trusted-ca-bundle" Jan 20 11:10:01 crc kubenswrapper[4725]: I0120 11:10:01.562947 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-image-registry"/"openshift-service-ca.crt" Jan 20 11:10:02 crc kubenswrapper[4725]: I0120 11:10:02.599482 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-console-operator"/"openshift-service-ca.crt" Jan 20 11:10:02 crc kubenswrapper[4725]: I0120 11:10:02.685830 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-authentication-operator"/"openshift-service-ca.crt" Jan 20 11:10:03 crc kubenswrapper[4725]: I0120 11:10:03.672187 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-cluster-machine-approver"/"machine-approver-sa-dockercfg-nl2j4" Jan 20 11:10:04 crc kubenswrapper[4725]: I0120 11:10:04.324585 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-controller-manager-operator-config" Jan 20 11:10:05 crc kubenswrapper[4725]: I0120 11:10:05.737998 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"marketplace-operator-dockercfg-5nsgg" Jan 20 11:10:07 crc kubenswrapper[4725]: I0120 11:10:07.828728 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-marketplace"/"kube-root-ca.crt" Jan 20 11:10:08 crc kubenswrapper[4725]: I0120 11:10:08.487034 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 11:10:10 crc kubenswrapper[4725]: I0120 11:10:10.074018 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-api"/"machine-api-operator-dockercfg-mfbb7" Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.256392 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.258581 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/0.log" Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.258651 4725 generic.go:334] "Generic (PLEG): container finished" podID="f614b9022728cf315e60c057852e563e" containerID="e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1" exitCode=137 Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.258698 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" 
event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerDied","Data":"e3bbacf2d7780b8573824d001346405f0420c7d380342fbe71ab39458535d5e1"} Jan 20 11:10:11 crc kubenswrapper[4725]: I0120 11:10:11.258746 4725 scope.go:117] "RemoveContainer" containerID="bd35f4a530a1f00276d2d45f4dd398be49261ec5400f6a39ce2e8b8f4d0b0e6b" Jan 20 11:10:12 crc kubenswrapper[4725]: I0120 11:10:12.008911 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"image-registry-tls" Jan 20 11:10:12 crc kubenswrapper[4725]: I0120 11:10:12.267033 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_f614b9022728cf315e60c057852e563e/kube-controller-manager/1.log" Jan 20 11:10:12 crc kubenswrapper[4725]: I0120 11:10:12.268519 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"f614b9022728cf315e60c057852e563e","Type":"ContainerStarted","Data":"419df3a758f9387d9c10937abbec55a1db175c3d47cba10ac5d6f26113c8f2a1"} Jan 20 11:10:12 crc kubenswrapper[4725]: I0120 11:10:12.626330 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ingress"/"router-metrics-certs-default" Jan 20 11:10:13 crc kubenswrapper[4725]: I0120 11:10:13.071752 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 11:10:14 crc kubenswrapper[4725]: I0120 11:10:14.396958 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-apiserver"/"image-import-ca" Jan 20 11:10:14 crc kubenswrapper[4725]: I0120 11:10:14.925961 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:10:18 crc kubenswrapper[4725]: I0120 11:10:18.353355 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-ovn-kubernetes"/"ovn-kubernetes-control-plane-dockercfg-gs7dd" Jan 20 11:10:18 crc kubenswrapper[4725]: I0120 11:10:18.616913 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-machine-config-operator"/"machine-config-operator-dockercfg-98p87" Jan 20 11:10:19 crc kubenswrapper[4725]: I0120 11:10:19.793363 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:10:19 crc kubenswrapper[4725]: I0120 11:10:19.799059 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:10:21 crc kubenswrapper[4725]: I0120 11:10:21.530623 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-apiserver-operator"/"kube-apiserver-operator-config" Jan 20 11:10:23 crc kubenswrapper[4725]: I0120 11:10:23.069988 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-network-operator"/"kube-root-ca.crt" Jan 20 11:10:24 crc kubenswrapper[4725]: I0120 11:10:24.930015 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 20 11:10:26 crc kubenswrapper[4725]: I0120 11:10:26.728407 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": 
dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:10:26 crc kubenswrapper[4725]: I0120 11:10:26.728478 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:10:27 crc kubenswrapper[4725]: I0120 11:10:27.310601 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-image-registry"/"registry-dockercfg-kzzsd" Jan 20 11:10:28 crc kubenswrapper[4725]: I0120 11:10:28.433761 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-kube-controller-manager-operator"/"kube-root-ca.crt" Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.315376 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.315969 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" containerID="cri-o://ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" gracePeriod=30 Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.405286 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.405537 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" containerID="cri-o://0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" gracePeriod=30 Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.898698 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:10:30 crc kubenswrapper[4725]: I0120 11:10:30.904617 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.071962 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.072545 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") pod \"600286e6-beb3-40f1-9077-9c8abf34d55a\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.072951 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca" (OuterVolumeSpecName: "client-ca") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "client-ca". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073794 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073823 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") pod \"600286e6-beb3-40f1-9077-9c8abf34d55a\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073849 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") pod \"600286e6-beb3-40f1-9077-9c8abf34d55a\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.074369 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.073935 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") pod \"600286e6-beb3-40f1-9077-9c8abf34d55a\" (UID: \"600286e6-beb3-40f1-9077-9c8abf34d55a\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.074950 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") pod \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\" (UID: \"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39\") " Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.074892 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config" (OuterVolumeSpecName: "config") pod "600286e6-beb3-40f1-9077-9c8abf34d55a" (UID: "600286e6-beb3-40f1-9077-9c8abf34d55a"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.075456 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.075492 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.075513 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.075978 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca" (OuterVolumeSpecName: "client-ca") pod "600286e6-beb3-40f1-9077-9c8abf34d55a" (UID: "600286e6-beb3-40f1-9077-9c8abf34d55a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.076915 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config" (OuterVolumeSpecName: "config") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.079012 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7" (OuterVolumeSpecName: "kube-api-access-7kmh7") pod "600286e6-beb3-40f1-9077-9c8abf34d55a" (UID: "600286e6-beb3-40f1-9077-9c8abf34d55a"). InnerVolumeSpecName "kube-api-access-7kmh7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.079125 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "600286e6-beb3-40f1-9077-9c8abf34d55a" (UID: "600286e6-beb3-40f1-9077-9c8abf34d55a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.079337 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q" (OuterVolumeSpecName: "kube-api-access-87v9q") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "kube-api-access-87v9q". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.079552 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" (UID: "eb4612ff-dcf7-4e19-af27-fb8b3b54ce39"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176823 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/600286e6-beb3-40f1-9077-9c8abf34d55a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176867 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176888 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176905 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7kmh7\" (UniqueName: \"kubernetes.io/projected/600286e6-beb3-40f1-9077-9c8abf34d55a-kube-api-access-7kmh7\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176923 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/600286e6-beb3-40f1-9077-9c8abf34d55a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.176939 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-87v9q\" (UniqueName: \"kubernetes.io/projected/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39-kube-api-access-87v9q\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437303 4725 generic.go:334] "Generic (PLEG): container finished" podID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerID="0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" exitCode=0 Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437381 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" event={"ID":"600286e6-beb3-40f1-9077-9c8abf34d55a","Type":"ContainerDied","Data":"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a"} Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437412 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" event={"ID":"600286e6-beb3-40f1-9077-9c8abf34d55a","Type":"ContainerDied","Data":"ade77836dcd269f9c5de0b97ad651f7a735e267f67b9c6aa9acfc5f72e48f82f"} Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437408 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.437430 4725 scope.go:117] "RemoveContainer" containerID="0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.440548 4725 generic.go:334] "Generic (PLEG): container finished" podID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerID="ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" exitCode=0 Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.440588 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" event={"ID":"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39","Type":"ContainerDied","Data":"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053"} Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.440665 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" event={"ID":"eb4612ff-dcf7-4e19-af27-fb8b3b54ce39","Type":"ContainerDied","Data":"3aabb471cfdad379863cc9d4e63ad21b453b02806d14a79762da7bd36f235094"} Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.440555 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-879f6c89f-r5qmp" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.463564 4725 scope.go:117] "RemoveContainer" containerID="0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" Jan 20 11:10:31 crc kubenswrapper[4725]: E0120 11:10:31.464550 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a\": container with ID starting with 0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a not found: ID does not exist" containerID="0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.464611 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a"} err="failed to get container status \"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a\": rpc error: code = NotFound desc = could not find container \"0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a\": container with ID starting with 0ec30deae86a7a8ddfff86819ddc6f4f12f6f2fd12406fe866d400de06f3575a not found: ID does not exist" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.464648 4725 scope.go:117] "RemoveContainer" containerID="ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.475709 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.483436 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6576b87f9c-lwhzw"] Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.489754 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.494400 4725 kubelet.go:2431] "SyncLoop REMOVE" 
source="api" pods=["openshift-controller-manager/controller-manager-879f6c89f-r5qmp"] Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.506627 4725 scope.go:117] "RemoveContainer" containerID="ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" Jan 20 11:10:31 crc kubenswrapper[4725]: E0120 11:10:31.507345 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053\": container with ID starting with ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053 not found: ID does not exist" containerID="ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053" Jan 20 11:10:31 crc kubenswrapper[4725]: I0120 11:10:31.507398 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053"} err="failed to get container status \"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053\": rpc error: code = NotFound desc = could not find container \"ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053\": container with ID starting with ea963c578918bf4bf99332528fc4e04f7b319045506af3111dcaf31e6c185053 not found: ID does not exist" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.364846 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:32 crc kubenswrapper[4725]: E0120 11:10:32.365420 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365443 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: E0120 11:10:32.365478 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365492 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: E0120 11:10:32.365502 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365509 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365639 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f85e55b1a89d02b0cb034b1ea31ed45a" containerName="startup-monitor" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365655 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" containerName="route-controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.365669 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" containerName="controller-manager" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.366362 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.371135 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.372564 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.372748 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.373898 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.373925 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.374153 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.376565 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.376695 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.378688 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.379830 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.380455 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.381002 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.381903 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.383420 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.383741 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.393148 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.423775 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499585 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: 
\"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499838 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499869 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499903 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.499946 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.500190 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.500344 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.500372 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.500429 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" 
(UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601453 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601519 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601563 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601620 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601653 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601675 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601701 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: 
\"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.601753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.603021 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.604208 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.610377 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.610411 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.610757 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.611158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.611669 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.626016 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") pod \"controller-manager-867865d494-fqfz6\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.627212 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") pod \"route-controller-manager-6f58f7659d-bc64g\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.712365 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.727804 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.931298 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.949530 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="600286e6-beb3-40f1-9077-9c8abf34d55a" path="/var/lib/kubelet/pods/600286e6-beb3-40f1-9077-9c8abf34d55a/volumes" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.951107 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb4612ff-dcf7-4e19-af27-fb8b3b54ce39" path="/var/lib/kubelet/pods/eb4612ff-dcf7-4e19-af27-fb8b3b54ce39/volumes" Jan 20 11:10:32 crc kubenswrapper[4725]: I0120 11:10:32.976339 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:32 crc kubenswrapper[4725]: W0120 11:10:32.983488 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9639e6c8_b710_4924_83fd_88fddbc3685a.slice/crio-634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2 WatchSource:0}: Error finding container 634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2: Status 404 returned error can't find the container with id 634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2 Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.458895 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" event={"ID":"95b59a23-ecd6-4f96-bf93-ffc1efdefc25","Type":"ContainerStarted","Data":"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8"} Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.458967 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" event={"ID":"95b59a23-ecd6-4f96-bf93-ffc1efdefc25","Type":"ContainerStarted","Data":"d2641cc78f4a233f2cdc04c78b092c86c19dcb9f83e6fab7f6cf33b38f6cf72a"} Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.458984 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.460602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" event={"ID":"9639e6c8-b710-4924-83fd-88fddbc3685a","Type":"ContainerStarted","Data":"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8"} Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.460649 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" event={"ID":"9639e6c8-b710-4924-83fd-88fddbc3685a","Type":"ContainerStarted","Data":"634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2"} Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.460796 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.497631 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" podStartSLOduration=3.497610682 podStartE2EDuration="3.497610682s" podCreationTimestamp="2026-01-20 11:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:33.490111154 +0000 UTC m=+361.698433127" watchObservedRunningTime="2026-01-20 11:10:33.497610682 +0000 UTC m=+361.705932655" Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.499431 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:33 crc kubenswrapper[4725]: I0120 11:10:33.940740 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" podStartSLOduration=3.940721367 podStartE2EDuration="3.940721367s" podCreationTimestamp="2026-01-20 11:10:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:33.841273357 +0000 UTC m=+362.049595340" watchObservedRunningTime="2026-01-20 11:10:33.940721367 +0000 UTC m=+362.149043340" Jan 20 11:10:34 crc kubenswrapper[4725]: I0120 11:10:34.209716 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:36 crc kubenswrapper[4725]: I0120 11:10:36.074896 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:36 crc kubenswrapper[4725]: I0120 11:10:36.108375 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:36 crc kubenswrapper[4725]: I0120 11:10:36.625639 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerName="route-controller-manager" containerID="cri-o://0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" gracePeriod=30 Jan 20 11:10:36 crc kubenswrapper[4725]: I0120 11:10:36.625598 4725 kuberuntime_container.go:808] "Killing container with a grace period" 
pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerName="controller-manager" containerID="cri-o://94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" gracePeriod=30 Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.046694 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.051113 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.162852 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.162910 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.162953 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") pod \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.162979 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") pod \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163002 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163037 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163104 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") pod \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.163141 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") pod \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\" (UID: \"95b59a23-ecd6-4f96-bf93-ffc1efdefc25\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 
11:10:37.163158 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") pod \"9639e6c8-b710-4924-83fd-88fddbc3685a\" (UID: \"9639e6c8-b710-4924-83fd-88fddbc3685a\") " Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.164193 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca" (OuterVolumeSpecName: "client-ca") pod "95b59a23-ecd6-4f96-bf93-ffc1efdefc25" (UID: "95b59a23-ecd6-4f96-bf93-ffc1efdefc25"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.164583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca" (OuterVolumeSpecName: "client-ca") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.164765 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config" (OuterVolumeSpecName: "config") pod "95b59a23-ecd6-4f96-bf93-ffc1efdefc25" (UID: "95b59a23-ecd6-4f96-bf93-ffc1efdefc25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.164858 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config" (OuterVolumeSpecName: "config") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.165223 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.168422 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.169672 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "95b59a23-ecd6-4f96-bf93-ffc1efdefc25" (UID: "95b59a23-ecd6-4f96-bf93-ffc1efdefc25"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.169721 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk" (OuterVolumeSpecName: "kube-api-access-rvnbk") pod "95b59a23-ecd6-4f96-bf93-ffc1efdefc25" (UID: "95b59a23-ecd6-4f96-bf93-ffc1efdefc25"). InnerVolumeSpecName "kube-api-access-rvnbk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.176229 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv" (OuterVolumeSpecName: "kube-api-access-qpfzv") pod "9639e6c8-b710-4924-83fd-88fddbc3685a" (UID: "9639e6c8-b710-4924-83fd-88fddbc3685a"). InnerVolumeSpecName "kube-api-access-qpfzv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.265518 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qpfzv\" (UniqueName: \"kubernetes.io/projected/9639e6c8-b710-4924-83fd-88fddbc3685a-kube-api-access-qpfzv\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.265992 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266037 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266061 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266101 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266127 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/9639e6c8-b710-4924-83fd-88fddbc3685a-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266143 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rvnbk\" (UniqueName: \"kubernetes.io/projected/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-kube-api-access-rvnbk\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266161 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/95b59a23-ecd6-4f96-bf93-ffc1efdefc25-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.266174 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/9639e6c8-b710-4924-83fd-88fddbc3685a-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.308719 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 
11:10:37 crc kubenswrapper[4725]: E0120 11:10:37.308980 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerName="route-controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.308998 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerName="route-controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: E0120 11:10:37.309022 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerName="controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.309029 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerName="controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.309599 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerName="route-controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.309623 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerName="controller-manager" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.309997 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.314328 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.314981 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.320012 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.330740 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366832 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366856 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " 
pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366895 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366909 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366927 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366950 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366966 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.366993 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.468224 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.468775 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " 
pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.468991 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.469366 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.469624 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.469912 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.470207 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.470445 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.470697 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.471893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.470460 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.472774 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.473497 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.474649 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.475682 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.478060 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.490831 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") pod \"route-controller-manager-6b5549788c-zlfnd\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.493874 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") pod \"controller-manager-7bdc655bf5-7czvl\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.623692 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632014 4725 generic.go:334] "Generic (PLEG): container finished" podID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" containerID="0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" exitCode=0 Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632095 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632109 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" event={"ID":"95b59a23-ecd6-4f96-bf93-ffc1efdefc25","Type":"ContainerDied","Data":"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8"} Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632142 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g" event={"ID":"95b59a23-ecd6-4f96-bf93-ffc1efdefc25","Type":"ContainerDied","Data":"d2641cc78f4a233f2cdc04c78b092c86c19dcb9f83e6fab7f6cf33b38f6cf72a"} Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632162 4725 scope.go:117] "RemoveContainer" containerID="0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.632176 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.633959 4725 generic.go:334] "Generic (PLEG): container finished" podID="9639e6c8-b710-4924-83fd-88fddbc3685a" containerID="94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" exitCode=0 Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.633985 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" event={"ID":"9639e6c8-b710-4924-83fd-88fddbc3685a","Type":"ContainerDied","Data":"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8"} Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.634004 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" event={"ID":"9639e6c8-b710-4924-83fd-88fddbc3685a","Type":"ContainerDied","Data":"634795ef5b82ae3c5466e830f01957e946ae9498de88c251ca057a67e42b3ab2"} Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.634052 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-867865d494-fqfz6" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.661966 4725 scope.go:117] "RemoveContainer" containerID="0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" Jan 20 11:10:37 crc kubenswrapper[4725]: E0120 11:10:37.664240 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8\": container with ID starting with 0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8 not found: ID does not exist" containerID="0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.664291 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8"} err="failed to get container status \"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8\": rpc error: code = NotFound desc = could not find container \"0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8\": container with ID starting with 0be220cc1c1cc9eae7269715a59b5bc3c0f3f02f64c531e8d87477041fa6ebd8 not found: ID does not exist" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.664331 4725 scope.go:117] "RemoveContainer" containerID="94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.664940 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.684921 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6f58f7659d-bc64g"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.691178 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.691288 4725 scope.go:117] "RemoveContainer" containerID="94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" Jan 20 11:10:37 crc kubenswrapper[4725]: E0120 11:10:37.691767 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8\": container with ID starting with 94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8 not found: ID does not exist" containerID="94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.691810 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8"} err="failed to get container status \"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8\": rpc error: code = NotFound desc = could not find container \"94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8\": container with ID starting with 94a9ba9d87f1d523742fa5672918e7ce993d1a54e9de69365d52a4c0c811d5f8 not found: ID does not exist" Jan 20 11:10:37 crc kubenswrapper[4725]: I0120 11:10:37.696168 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-867865d494-fqfz6"] Jan 20 11:10:38 crc 
kubenswrapper[4725]: I0120 11:10:38.031659 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.079722 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 11:10:38 crc kubenswrapper[4725]: W0120 11:10:38.089421 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8b8409bf_69df_4201_9a4b_e2462760929d.slice/crio-0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283 WatchSource:0}: Error finding container 0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283: Status 404 returned error can't find the container with id 0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283 Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.640960 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" event={"ID":"aef2156e-ea5d-4a60-83f6-8b7e79400a0f","Type":"ContainerStarted","Data":"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded"} Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.641005 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" event={"ID":"aef2156e-ea5d-4a60-83f6-8b7e79400a0f","Type":"ContainerStarted","Data":"856c5e58827f572d795e8b9e0bf4456fad7a0ebce5396897689f34b89161e927"} Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.642140 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.644249 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" event={"ID":"8b8409bf-69df-4201-9a4b-e2462760929d","Type":"ContainerStarted","Data":"1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94"} Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.644273 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" event={"ID":"8b8409bf-69df-4201-9a4b-e2462760929d","Type":"ContainerStarted","Data":"0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283"} Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.645097 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.648683 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.663835 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" podStartSLOduration=1.663820013 podStartE2EDuration="1.663820013s" podCreationTimestamp="2026-01-20 11:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:38.660295431 +0000 UTC m=+366.868617404" watchObservedRunningTime="2026-01-20 11:10:38.663820013 +0000 UTC m=+366.872141986" Jan 20 11:10:38 crc 
kubenswrapper[4725]: I0120 11:10:38.679641 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" podStartSLOduration=1.679625473 podStartE2EDuration="1.679625473s" podCreationTimestamp="2026-01-20 11:10:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:38.67794622 +0000 UTC m=+366.886268213" watchObservedRunningTime="2026-01-20 11:10:38.679625473 +0000 UTC m=+366.887947446" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.743198 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.938550 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="95b59a23-ecd6-4f96-bf93-ffc1efdefc25" path="/var/lib/kubelet/pods/95b59a23-ecd6-4f96-bf93-ffc1efdefc25/volumes" Jan 20 11:10:38 crc kubenswrapper[4725]: I0120 11:10:38.939577 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9639e6c8-b710-4924-83fd-88fddbc3685a" path="/var/lib/kubelet/pods/9639e6c8-b710-4924-83fd-88fddbc3685a/volumes" Jan 20 11:10:47 crc kubenswrapper[4725]: I0120 11:10:47.912283 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbr29"] Jan 20 11:10:47 crc kubenswrapper[4725]: I0120 11:10:47.912961 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-vbr29" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="registry-server" containerID="cri-o://3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" gracePeriod=2 Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.322587 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.402058 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") pod \"247dcae1-930b-476d-abbe-f33c3da0730b\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.402205 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") pod \"247dcae1-930b-476d-abbe-f33c3da0730b\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.402339 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") pod \"247dcae1-930b-476d-abbe-f33c3da0730b\" (UID: \"247dcae1-930b-476d-abbe-f33c3da0730b\") " Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.403140 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities" (OuterVolumeSpecName: "utilities") pod "247dcae1-930b-476d-abbe-f33c3da0730b" (UID: "247dcae1-930b-476d-abbe-f33c3da0730b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.415574 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8" (OuterVolumeSpecName: "kube-api-access-z8wq8") pod "247dcae1-930b-476d-abbe-f33c3da0730b" (UID: "247dcae1-930b-476d-abbe-f33c3da0730b"). InnerVolumeSpecName "kube-api-access-z8wq8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.487224 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "247dcae1-930b-476d-abbe-f33c3da0730b" (UID: "247dcae1-930b-476d-abbe-f33c3da0730b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.503916 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-z8wq8\" (UniqueName: \"kubernetes.io/projected/247dcae1-930b-476d-abbe-f33c3da0730b-kube-api-access-z8wq8\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.503985 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.504009 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/247dcae1-930b-476d-abbe-f33c3da0730b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701362 4725 generic.go:334] "Generic (PLEG): container finished" podID="247dcae1-930b-476d-abbe-f33c3da0730b" containerID="3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" exitCode=0 Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701412 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerDied","Data":"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3"} Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701448 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-vbr29" event={"ID":"247dcae1-930b-476d-abbe-f33c3da0730b","Type":"ContainerDied","Data":"01a79750127c09ea5c6dc20b661d6675fdb1d12c0c260ea3667e9b8f6125164f"} Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701470 4725 scope.go:117] "RemoveContainer" containerID="3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.701604 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-vbr29" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.737322 4725 scope.go:117] "RemoveContainer" containerID="6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.738217 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-vbr29"] Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.747682 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-vbr29"] Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.766174 4725 scope.go:117] "RemoveContainer" containerID="319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.786679 4725 scope.go:117] "RemoveContainer" containerID="3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" Jan 20 11:10:48 crc kubenswrapper[4725]: E0120 11:10:48.787206 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3\": container with ID starting with 3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3 not found: ID does not exist" containerID="3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.787249 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3"} err="failed to get container status \"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3\": rpc error: code = NotFound desc = could not find container \"3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3\": container with ID starting with 3fbdaac7a794f19e8c69f5422e6fb87dc9cdc634a039e858451ece6c43810bf3 not found: ID does not exist" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.787286 4725 scope.go:117] "RemoveContainer" containerID="6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b" Jan 20 11:10:48 crc kubenswrapper[4725]: E0120 11:10:48.787547 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b\": container with ID starting with 6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b not found: ID does not exist" containerID="6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.787575 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b"} err="failed to get container status \"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b\": rpc error: code = NotFound desc = could not find container \"6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b\": container with ID starting with 6af623c61d80b856a4684e07d31119222fc79288bbdf0ea9ec5edf1fb293fc6b not found: ID does not exist" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.787594 4725 scope.go:117] "RemoveContainer" containerID="319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72" Jan 20 11:10:48 crc kubenswrapper[4725]: E0120 11:10:48.788007 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72\": container with ID starting with 319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72 not found: ID does not exist" containerID="319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.788129 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72"} err="failed to get container status \"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72\": rpc error: code = NotFound desc = could not find container \"319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72\": container with ID starting with 319b868656c7fff6314303e2870aac558be49b56b40a78923c6a22ddb62ffc72 not found: ID does not exist" Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.796022 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.796396 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerName="route-controller-manager" containerID="cri-o://096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" gracePeriod=30 Jan 20 11:10:48 crc kubenswrapper[4725]: I0120 11:10:48.939948 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" path="/var/lib/kubelet/pods/247dcae1-930b-476d-abbe-f33c3da0730b/volumes" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.394105 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.503720 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") pod \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.503793 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") pod \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.503923 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") pod \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.503992 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") pod \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\" (UID: \"aef2156e-ea5d-4a60-83f6-8b7e79400a0f\") " Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.504958 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config" (OuterVolumeSpecName: "config") pod "aef2156e-ea5d-4a60-83f6-8b7e79400a0f" (UID: "aef2156e-ea5d-4a60-83f6-8b7e79400a0f"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.504943 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca" (OuterVolumeSpecName: "client-ca") pod "aef2156e-ea5d-4a60-83f6-8b7e79400a0f" (UID: "aef2156e-ea5d-4a60-83f6-8b7e79400a0f"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.516272 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "aef2156e-ea5d-4a60-83f6-8b7e79400a0f" (UID: "aef2156e-ea5d-4a60-83f6-8b7e79400a0f"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.516345 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm" (OuterVolumeSpecName: "kube-api-access-cn9dm") pod "aef2156e-ea5d-4a60-83f6-8b7e79400a0f" (UID: "aef2156e-ea5d-4a60-83f6-8b7e79400a0f"). InnerVolumeSpecName "kube-api-access-cn9dm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.605600 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.605648 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.605702 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.605715 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cn9dm\" (UniqueName: \"kubernetes.io/projected/aef2156e-ea5d-4a60-83f6-8b7e79400a0f-kube-api-access-cn9dm\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712141 4725 generic.go:334] "Generic (PLEG): container finished" podID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerID="096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" exitCode=0 Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712207 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" event={"ID":"aef2156e-ea5d-4a60-83f6-8b7e79400a0f","Type":"ContainerDied","Data":"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded"} Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" event={"ID":"aef2156e-ea5d-4a60-83f6-8b7e79400a0f","Type":"ContainerDied","Data":"856c5e58827f572d795e8b9e0bf4456fad7a0ebce5396897689f34b89161e927"} Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712257 4725 scope.go:117] "RemoveContainer" containerID="096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.712261 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.729946 4725 scope.go:117] "RemoveContainer" containerID="096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" Jan 20 11:10:49 crc kubenswrapper[4725]: E0120 11:10:49.730752 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded\": container with ID starting with 096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded not found: ID does not exist" containerID="096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.730842 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded"} err="failed to get container status \"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded\": rpc error: code = NotFound desc = could not find container \"096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded\": container with ID starting with 096514e4cfb7d9bc2b8b1c83641b6e7525f7d9627f37004f7913d4331939aded not found: ID does not exist" Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.753160 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:49 crc kubenswrapper[4725]: I0120 11:10:49.753475 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b5549788c-zlfnd"] Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.309118 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.309730 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-78bg4" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" containerID="cri-o://4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784" gracePeriod=2 Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.512887 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.513223 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-lxmdj" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" containerID="cri-o://d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48" gracePeriod=2 Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.633802 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz"] Jan 20 11:10:50 crc kubenswrapper[4725]: E0120 11:10:50.634071 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="extract-content" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634102 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="extract-content" Jan 20 11:10:50 crc kubenswrapper[4725]: E0120 11:10:50.634124 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerName="route-controller-manager" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634132 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerName="route-controller-manager" Jan 20 11:10:50 crc kubenswrapper[4725]: E0120 11:10:50.634143 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="registry-server" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634149 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="registry-server" Jan 20 11:10:50 crc kubenswrapper[4725]: E0120 11:10:50.634164 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="extract-utilities" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634172 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="extract-utilities" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634281 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="247dcae1-930b-476d-abbe-f33c3da0730b" containerName="registry-server" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634293 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" containerName="route-controller-manager" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.634805 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.637623 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"serving-cert" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.637839 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"client-ca" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.638685 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"openshift-service-ca.crt" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.639069 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"kube-root-ca.crt" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.639343 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-route-controller-manager"/"route-controller-manager-sa-dockercfg-h2zr2" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.640506 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-route-controller-manager"/"config" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.647030 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz"] Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.725340 4725 generic.go:334] "Generic (PLEG): container finished" podID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerID="4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784" exitCode=0 Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.725426 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" 
event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerDied","Data":"4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784"} Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.727903 4725 generic.go:334] "Generic (PLEG): container finished" podID="39d02691-2128-45e8-841b-5bbf79e0a116" containerID="d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48" exitCode=0 Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.727937 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerDied","Data":"d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48"} Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.889046 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-config\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.889143 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5jhfv\" (UniqueName: \"kubernetes.io/projected/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-kube-api-access-5jhfv\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.889196 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-client-ca\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.889216 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-serving-cert\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.937674 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="aef2156e-ea5d-4a60-83f6-8b7e79400a0f" path="/var/lib/kubelet/pods/aef2156e-ea5d-4a60-83f6-8b7e79400a0f/volumes" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.990331 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-config\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.990518 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5jhfv\" (UniqueName: \"kubernetes.io/projected/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-kube-api-access-5jhfv\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: 
\"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.990637 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-client-ca\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.991582 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-serving-cert\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.992721 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-config\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.992229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-client-ca\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:50 crc kubenswrapper[4725]: I0120 11:10:50.997525 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-serving-cert\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.017809 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5jhfv\" (UniqueName: \"kubernetes.io/projected/d0b5252f-d628-4835-bf0d-a2ff0ccb14c4-kube-api-access-5jhfv\") pod \"route-controller-manager-7bcb5959-268hz\" (UID: \"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4\") " pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.072560 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.093890 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") pod \"4f648359-ab53-49a7-8f1a-77281c2bd53c\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.094140 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") pod \"4f648359-ab53-49a7-8f1a-77281c2bd53c\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.094207 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") pod \"4f648359-ab53-49a7-8f1a-77281c2bd53c\" (UID: \"4f648359-ab53-49a7-8f1a-77281c2bd53c\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.095429 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities" (OuterVolumeSpecName: "utilities") pod "4f648359-ab53-49a7-8f1a-77281c2bd53c" (UID: "4f648359-ab53-49a7-8f1a-77281c2bd53c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.099330 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl" (OuterVolumeSpecName: "kube-api-access-66ggl") pod "4f648359-ab53-49a7-8f1a-77281c2bd53c" (UID: "4f648359-ab53-49a7-8f1a-77281c2bd53c"). InnerVolumeSpecName "kube-api-access-66ggl". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.178662 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195233 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") pod \"39d02691-2128-45e8-841b-5bbf79e0a116\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195333 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") pod \"39d02691-2128-45e8-841b-5bbf79e0a116\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195473 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") pod \"39d02691-2128-45e8-841b-5bbf79e0a116\" (UID: \"39d02691-2128-45e8-841b-5bbf79e0a116\") " Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195749 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-66ggl\" (UniqueName: \"kubernetes.io/projected/4f648359-ab53-49a7-8f1a-77281c2bd53c-kube-api-access-66ggl\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.195763 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.197597 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities" (OuterVolumeSpecName: "utilities") pod "39d02691-2128-45e8-841b-5bbf79e0a116" (UID: "39d02691-2128-45e8-841b-5bbf79e0a116"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.200887 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d" (OuterVolumeSpecName: "kube-api-access-d2n6d") pod "39d02691-2128-45e8-841b-5bbf79e0a116" (UID: "39d02691-2128-45e8-841b-5bbf79e0a116"). InnerVolumeSpecName "kube-api-access-d2n6d". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.230459 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "39d02691-2128-45e8-841b-5bbf79e0a116" (UID: "39d02691-2128-45e8-841b-5bbf79e0a116"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.239824 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "4f648359-ab53-49a7-8f1a-77281c2bd53c" (UID: "4f648359-ab53-49a7-8f1a-77281c2bd53c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.285482 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.296665 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-d2n6d\" (UniqueName: \"kubernetes.io/projected/39d02691-2128-45e8-841b-5bbf79e0a116-kube-api-access-d2n6d\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.296703 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/4f648359-ab53-49a7-8f1a-77281c2bd53c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.296714 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.296722 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/39d02691-2128-45e8-841b-5bbf79e0a116-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.619654 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.734271 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" event={"ID":"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4","Type":"ContainerStarted","Data":"92b421569e49355818c265e4463cebaf6267eea5a055a89a0398f40dd35cafa0"} Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.736865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-lxmdj" event={"ID":"39d02691-2128-45e8-841b-5bbf79e0a116","Type":"ContainerDied","Data":"947644fa4cdb3ece3385cefa57c8a4ab47c9b07453257db4d816fb94806bf10c"} Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.736887 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-lxmdj" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.736921 4725 scope.go:117] "RemoveContainer" containerID="d2ab977fda92518444c1bc65215c64d5d47407bae017aeec5068d401607e4b48" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.740713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78bg4" event={"ID":"4f648359-ab53-49a7-8f1a-77281c2bd53c","Type":"ContainerDied","Data":"c8cf137c59938a71804fd93575de29dac65e3fbdae7d9616af8e1e0e425812c7"} Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.740855 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-78bg4" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.759626 4725 scope.go:117] "RemoveContainer" containerID="5d88e1156fdd2131fb13a542776647afc695e341abc2d0bb759d85d523d36656" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.779064 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.781929 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-lxmdj"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.786274 4725 scope.go:117] "RemoveContainer" containerID="bef010ae40f12ebf94868b1a7f63b8c8ce98852cd1c4ccb364c0b676606ca709" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.793320 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.796434 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-78bg4"] Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.808379 4725 scope.go:117] "RemoveContainer" containerID="4ea88973e6bc5bfee3d9e0f84e8590574c6a997771e7bc383f740e105c2a7784" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.833186 4725 scope.go:117] "RemoveContainer" containerID="9ddffe79c18c43e0511499904feb4ce11970963d5b621ba51bd27f1e5c8b5059" Jan 20 11:10:51 crc kubenswrapper[4725]: I0120 11:10:51.850923 4725 scope.go:117] "RemoveContainer" containerID="06596abc1be5a61b774b86675bea7d758f393f271eafec99aee9e0618b84133b" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.748181 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" event={"ID":"d0b5252f-d628-4835-bf0d-a2ff0ccb14c4","Type":"ContainerStarted","Data":"ce9608d63883a216710a955470d15d5b6a6b43b3842886a25e3377acd9d6cd05"} Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.748500 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.753374 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.794207 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-7bcb5959-268hz" podStartSLOduration=4.794192841 podStartE2EDuration="4.794192841s" podCreationTimestamp="2026-01-20 11:10:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:52.768466915 +0000 UTC m=+380.976788888" watchObservedRunningTime="2026-01-20 11:10:52.794192841 +0000 UTC m=+381.002514814" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.940522 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" path="/var/lib/kubelet/pods/39d02691-2128-45e8-841b-5bbf79e0a116/volumes" Jan 20 11:10:52 crc kubenswrapper[4725]: I0120 11:10:52.941660 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" path="/var/lib/kubelet/pods/4f648359-ab53-49a7-8f1a-77281c2bd53c/volumes" Jan 20 
11:10:56 crc kubenswrapper[4725]: I0120 11:10:56.727439 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:10:56 crc kubenswrapper[4725]: I0120 11:10:56.728230 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042470 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xs9z9"] Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042795 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="extract-content" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042816 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="extract-content" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042838 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042846 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042857 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042866 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042875 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="extract-utilities" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042882 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="extract-utilities" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042893 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="extract-content" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042900 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="extract-content" Jan 20 11:10:57 crc kubenswrapper[4725]: E0120 11:10:57.042912 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="extract-utilities" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.042921 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="extract-utilities" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.043030 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="39d02691-2128-45e8-841b-5bbf79e0a116" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: 
I0120 11:10:57.043050 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="4f648359-ab53-49a7-8f1a-77281c2bd53c" containerName="registry-server" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.043603 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.056764 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xs9z9"] Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225542 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/877b47a7-ec29-4467-a0c7-a4561a12573b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225640 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-certificates\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225662 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-trusted-ca\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225695 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmjnf\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-kube-api-access-fmjnf\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225715 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/877b47a7-ec29-4467-a0c7-a4561a12573b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225748 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225766 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-tls\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " 
pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.225786 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-bound-sa-token\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.275503 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327065 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-trusted-ca\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327184 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fmjnf\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-kube-api-access-fmjnf\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327208 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/877b47a7-ec29-4467-a0c7-a4561a12573b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327233 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-tls\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327253 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-bound-sa-token\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327284 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/877b47a7-ec29-4467-a0c7-a4561a12573b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.327323 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-certificates\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.328821 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-certificates\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.328849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/877b47a7-ec29-4467-a0c7-a4561a12573b-trusted-ca\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.329562 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/877b47a7-ec29-4467-a0c7-a4561a12573b-ca-trust-extracted\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.334609 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/877b47a7-ec29-4467-a0c7-a4561a12573b-installation-pull-secrets\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.334736 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-registry-tls\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.348610 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fmjnf\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-kube-api-access-fmjnf\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.358979 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/877b47a7-ec29-4467-a0c7-a4561a12573b-bound-sa-token\") pod \"image-registry-66df7c8f76-xs9z9\" (UID: \"877b47a7-ec29-4467-a0c7-a4561a12573b\") " pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.364224 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:57 crc kubenswrapper[4725]: I0120 11:10:57.854475 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66df7c8f76-xs9z9"] Jan 20 11:10:57 crc kubenswrapper[4725]: W0120 11:10:57.863474 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod877b47a7_ec29_4467_a0c7_a4561a12573b.slice/crio-4766a4f1711cfd43998495f4f6a691a3602d660878a5f48a89b9f538dd4750ed WatchSource:0}: Error finding container 4766a4f1711cfd43998495f4f6a691a3602d660878a5f48a89b9f538dd4750ed: Status 404 returned error can't find the container with id 4766a4f1711cfd43998495f4f6a691a3602d660878a5f48a89b9f538dd4750ed Jan 20 11:10:58 crc kubenswrapper[4725]: I0120 11:10:58.790772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" event={"ID":"877b47a7-ec29-4467-a0c7-a4561a12573b","Type":"ContainerStarted","Data":"42f5a1cb1e396971fd427e7f5b06701f7c76c63599e3407c4a255735d51ccbd3"} Jan 20 11:10:58 crc kubenswrapper[4725]: I0120 11:10:58.791492 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" event={"ID":"877b47a7-ec29-4467-a0c7-a4561a12573b","Type":"ContainerStarted","Data":"4766a4f1711cfd43998495f4f6a691a3602d660878a5f48a89b9f538dd4750ed"} Jan 20 11:10:58 crc kubenswrapper[4725]: I0120 11:10:58.791529 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:10:58 crc kubenswrapper[4725]: I0120 11:10:58.815873 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" podStartSLOduration=1.815856248 podStartE2EDuration="1.815856248s" podCreationTimestamp="2026-01-20 11:10:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:10:58.81214836 +0000 UTC m=+387.020470343" watchObservedRunningTime="2026-01-20 11:10:58.815856248 +0000 UTC m=+387.024178221" Jan 20 11:11:17 crc kubenswrapper[4725]: I0120 11:11:17.369463 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66df7c8f76-xs9z9" Jan 20 11:11:17 crc kubenswrapper[4725]: I0120 11:11:17.428717 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.727872 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.729301 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.729414 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" 
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.730583 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:11:26 crc kubenswrapper[4725]: I0120 11:11:26.730729 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f" gracePeriod=600 Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.747668 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.748212 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" containerName="controller-manager" containerID="cri-o://1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94" gracePeriod=30 Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.973726 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f" exitCode=0 Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.973800 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f"} Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.973852 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b"} Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.973871 4725 scope.go:117] "RemoveContainer" containerID="1db8981f3ad820260f4de28e78ae0f55d0b702349131ef5bbc6fbef7c0a1a665" Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.975393 4725 generic.go:334] "Generic (PLEG): container finished" podID="8b8409bf-69df-4201-9a4b-e2462760929d" containerID="1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94" exitCode=0 Jan 20 11:11:27 crc kubenswrapper[4725]: I0120 11:11:27.975422 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" event={"ID":"8b8409bf-69df-4201-9a4b-e2462760929d","Type":"ContainerDied","Data":"1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94"} Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.190726 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.259990 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260044 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260062 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260098 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260127 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") pod \"8b8409bf-69df-4201-9a4b-e2462760929d\" (UID: \"8b8409bf-69df-4201-9a4b-e2462760929d\") " Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260954 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca" (OuterVolumeSpecName: "client-ca") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.260988 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.261026 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config" (OuterVolumeSpecName: "config") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.265185 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.265640 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f" (OuterVolumeSpecName: "kube-api-access-x957f") pod "8b8409bf-69df-4201-9a4b-e2462760929d" (UID: "8b8409bf-69df-4201-9a4b-e2462760929d"). InnerVolumeSpecName "kube-api-access-x957f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361253 4725 reconciler_common.go:293] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-client-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361301 4725 reconciler_common.go:293] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361531 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x957f\" (UniqueName: \"kubernetes.io/projected/8b8409bf-69df-4201-9a4b-e2462760929d-kube-api-access-x957f\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361548 4725 reconciler_common.go:293] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/8b8409bf-69df-4201-9a4b-e2462760929d-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.361561 4725 reconciler_common.go:293] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/8b8409bf-69df-4201-9a4b-e2462760929d-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.984477 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" event={"ID":"8b8409bf-69df-4201-9a4b-e2462760929d","Type":"ContainerDied","Data":"0c42ae7d99c9a5ac82f230b47c576c91ca86287412b5f4243c558a237edf5283"} Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.984512 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-7bdc655bf5-7czvl" Jan 20 11:11:28 crc kubenswrapper[4725]: I0120 11:11:28.984533 4725 scope.go:117] "RemoveContainer" containerID="1156333ce1cc5bea4181a414939a827a3a37c1560a43f861a6df6fd919476c94" Jan 20 11:11:29 crc kubenswrapper[4725]: I0120 11:11:29.009244 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 11:11:29 crc kubenswrapper[4725]: I0120 11:11:29.012740 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-7bdc655bf5-7czvl"] Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.338965 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-556d6dff97-md6hn"] Jan 20 11:11:30 crc kubenswrapper[4725]: E0120 11:11:30.340298 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" containerName="controller-manager" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.340437 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" containerName="controller-manager" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.340710 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" containerName="controller-manager" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.341498 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.347963 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"kube-root-ca.crt" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.348708 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-service-ca.crt" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.349031 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"serving-cert" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.349350 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"config" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.349747 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"client-ca" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.354209 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-556d6dff97-md6hn"] Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.354742 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-controller-manager"/"openshift-controller-manager-sa-dockercfg-msq4c" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.356359 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-controller-manager"/"openshift-global-ca" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493436 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-proxy-ca-bundles\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " 
pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493493 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-config\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493531 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sm27l\" (UniqueName: \"kubernetes.io/projected/b60a1413-98c5-44fe-ada4-9df9946861cd-kube-api-access-sm27l\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493570 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60a1413-98c5-44fe-ada4-9df9946861cd-serving-cert\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.493624 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-client-ca\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.594749 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-proxy-ca-bundles\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.594809 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-config\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.595510 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sm27l\" (UniqueName: \"kubernetes.io/projected/b60a1413-98c5-44fe-ada4-9df9946861cd-kube-api-access-sm27l\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.595568 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60a1413-98c5-44fe-ada4-9df9946861cd-serving-cert\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 
11:11:30.596800 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-client-ca\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.597909 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-client-ca\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.600024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-config\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.607551 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b60a1413-98c5-44fe-ada4-9df9946861cd-proxy-ca-bundles\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.611458 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/b60a1413-98c5-44fe-ada4-9df9946861cd-serving-cert\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.622761 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sm27l\" (UniqueName: \"kubernetes.io/projected/b60a1413-98c5-44fe-ada4-9df9946861cd-kube-api-access-sm27l\") pod \"controller-manager-556d6dff97-md6hn\" (UID: \"b60a1413-98c5-44fe-ada4-9df9946861cd\") " pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.669911 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.878517 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-556d6dff97-md6hn"] Jan 20 11:11:30 crc kubenswrapper[4725]: W0120 11:11:30.887559 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb60a1413_98c5_44fe_ada4_9df9946861cd.slice/crio-b3aca16cd6aacf9e368aa3f299a9f7fe88ab93bcf0f368e980daef87a021cee9 WatchSource:0}: Error finding container b3aca16cd6aacf9e368aa3f299a9f7fe88ab93bcf0f368e980daef87a021cee9: Status 404 returned error can't find the container with id b3aca16cd6aacf9e368aa3f299a9f7fe88ab93bcf0f368e980daef87a021cee9 Jan 20 11:11:30 crc kubenswrapper[4725]: I0120 11:11:30.938008 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8b8409bf-69df-4201-9a4b-e2462760929d" path="/var/lib/kubelet/pods/8b8409bf-69df-4201-9a4b-e2462760929d/volumes" Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.336772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" event={"ID":"b60a1413-98c5-44fe-ada4-9df9946861cd","Type":"ContainerStarted","Data":"61c3300e12afe758a90625c8afe2eabc06fca38bc245748d9b5561034d5d4340"} Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.336827 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" event={"ID":"b60a1413-98c5-44fe-ada4-9df9946861cd","Type":"ContainerStarted","Data":"b3aca16cd6aacf9e368aa3f299a9f7fe88ab93bcf0f368e980daef87a021cee9"} Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.337606 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.341730 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" Jan 20 11:11:31 crc kubenswrapper[4725]: I0120 11:11:31.360617 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-556d6dff97-md6hn" podStartSLOduration=4.360593026 podStartE2EDuration="4.360593026s" podCreationTimestamp="2026-01-20 11:11:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:11:31.359173711 +0000 UTC m=+419.567495694" watchObservedRunningTime="2026-01-20 11:11:31.360593026 +0000 UTC m=+419.568914999" Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.865511 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"] Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.866472 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-6n4zh" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="registry-server" containerID="cri-o://a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19" gracePeriod=30 Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.872397 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pplm"] Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.872718 4725 
kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-8pplm" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="registry-server" containerID="cri-o://fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1" gracePeriod=30 Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.887093 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.888426 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" containerID="cri-o://4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488" gracePeriod=30 Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.889750 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:11:38 crc kubenswrapper[4725]: I0120 11:11:38.890147 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-c2jtp" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" containerID="cri-o://a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee" gracePeriod=30 Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.050743 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.051207 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6nxjc" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="registry-server" containerID="cri-o://7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806" gracePeriod=30 Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.060972 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-htj9r"] Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.061695 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.075504 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-htj9r"] Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.165421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.165482 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6pjt4\" (UniqueName: \"kubernetes.io/projected/5666b0dd-5364-4bee-a091-26fa796770cf-kube-api-access-6pjt4\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.165565 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.266846 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.266900 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6pjt4\" (UniqueName: \"kubernetes.io/projected/5666b0dd-5364-4bee-a091-26fa796770cf-kube-api-access-6pjt4\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.266926 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.268053 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-trusted-ca\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.279158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: 
\"kubernetes.io/secret/5666b0dd-5364-4bee-a091-26fa796770cf-marketplace-operator-metrics\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.292320 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6pjt4\" (UniqueName: \"kubernetes.io/projected/5666b0dd-5364-4bee-a091-26fa796770cf-kube-api-access-6pjt4\") pod \"marketplace-operator-79b997595-htj9r\" (UID: \"5666b0dd-5364-4bee-a091-26fa796770cf\") " pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.381007 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.393475 4725 generic.go:334] "Generic (PLEG): container finished" podID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerID="4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488" exitCode=0 Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.393550 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" event={"ID":"502a4051-5a60-4e90-a3f2-7dc035950a9b","Type":"ContainerDied","Data":"4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488"} Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.395922 4725 generic.go:334] "Generic (PLEG): container finished" podID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerID="a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19" exitCode=0 Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.395978 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerDied","Data":"a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19"} Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.397691 4725 generic.go:334] "Generic (PLEG): container finished" podID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerID="a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee" exitCode=0 Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.397736 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerDied","Data":"a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee"} Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.401084 4725 generic.go:334] "Generic (PLEG): container finished" podID="1ba77d4b-0178-4730-8869-389efdf58851" containerID="fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1" exitCode=0 Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.401171 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerDied","Data":"fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1"} Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.476895 4725 generic.go:334] "Generic (PLEG): container finished" podID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerID="7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806" exitCode=0 Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.476966 4725 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerDied","Data":"7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806"} Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.510287 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.681527 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") pod \"502a4051-5a60-4e90-a3f2-7dc035950a9b\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.681640 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") pod \"502a4051-5a60-4e90-a3f2-7dc035950a9b\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.681696 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") pod \"502a4051-5a60-4e90-a3f2-7dc035950a9b\" (UID: \"502a4051-5a60-4e90-a3f2-7dc035950a9b\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.682976 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "502a4051-5a60-4e90-a3f2-7dc035950a9b" (UID: "502a4051-5a60-4e90-a3f2-7dc035950a9b"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.688463 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "502a4051-5a60-4e90-a3f2-7dc035950a9b" (UID: "502a4051-5a60-4e90-a3f2-7dc035950a9b"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.690649 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd" (OuterVolumeSpecName: "kube-api-access-qbmfd") pod "502a4051-5a60-4e90-a3f2-7dc035950a9b" (UID: "502a4051-5a60-4e90-a3f2-7dc035950a9b"). InnerVolumeSpecName "kube-api-access-qbmfd". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.733537 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.762166 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.785807 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.785838 4725 reconciler_common.go:293] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/502a4051-5a60-4e90-a3f2-7dc035950a9b-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.785848 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qbmfd\" (UniqueName: \"kubernetes.io/projected/502a4051-5a60-4e90-a3f2-7dc035950a9b-kube-api-access-qbmfd\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.886933 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") pod \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887028 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") pod \"10de7f77-2b14-4c56-b4db-ebb93422b89c\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887084 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") pod \"10de7f77-2b14-4c56-b4db-ebb93422b89c\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887134 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") pod \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887180 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") pod \"10de7f77-2b14-4c56-b4db-ebb93422b89c\" (UID: \"10de7f77-2b14-4c56-b4db-ebb93422b89c\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.887205 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") pod \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\" (UID: \"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6\") " Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.888097 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities" (OuterVolumeSpecName: "utilities") pod "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" (UID: "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.888996 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities" (OuterVolumeSpecName: "utilities") pod "10de7f77-2b14-4c56-b4db-ebb93422b89c" (UID: "10de7f77-2b14-4c56-b4db-ebb93422b89c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.903622 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh" (OuterVolumeSpecName: "kube-api-access-fqcqh") pod "10de7f77-2b14-4c56-b4db-ebb93422b89c" (UID: "10de7f77-2b14-4c56-b4db-ebb93422b89c"). InnerVolumeSpecName "kube-api-access-fqcqh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.903674 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp" (OuterVolumeSpecName: "kube-api-access-k8ntp") pod "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" (UID: "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6"). InnerVolumeSpecName "kube-api-access-k8ntp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.917448 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "10de7f77-2b14-4c56-b4db-ebb93422b89c" (UID: "10de7f77-2b14-4c56-b4db-ebb93422b89c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.947496 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-htj9r"] Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988429 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988825 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/10de7f77-2b14-4c56-b4db-ebb93422b89c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988839 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k8ntp\" (UniqueName: \"kubernetes.io/projected/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-kube-api-access-k8ntp\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988863 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fqcqh\" (UniqueName: \"kubernetes.io/projected/10de7f77-2b14-4c56-b4db-ebb93422b89c-kube-api-access-fqcqh\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:39 crc kubenswrapper[4725]: I0120 11:11:39.988876 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.026690 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" (UID: "7865a54a-be9b-4a0a-8c84-b45c8bfe40e6"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.062640 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.076178 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.095255 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195744 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") pod \"1ba77d4b-0178-4730-8869-389efdf58851\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195804 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") pod \"7ebdb343-11c1-4e64-9538-98ca4298b821\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195919 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") pod \"1ba77d4b-0178-4730-8869-389efdf58851\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") pod \"7ebdb343-11c1-4e64-9538-98ca4298b821\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.195965 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") pod \"1ba77d4b-0178-4730-8869-389efdf58851\" (UID: \"1ba77d4b-0178-4730-8869-389efdf58851\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.196006 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") pod \"7ebdb343-11c1-4e64-9538-98ca4298b821\" (UID: \"7ebdb343-11c1-4e64-9538-98ca4298b821\") " Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.196684 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities" (OuterVolumeSpecName: "utilities") pod "1ba77d4b-0178-4730-8869-389efdf58851" (UID: "1ba77d4b-0178-4730-8869-389efdf58851"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.197457 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities" (OuterVolumeSpecName: "utilities") pod "7ebdb343-11c1-4e64-9538-98ca4298b821" (UID: "7ebdb343-11c1-4e64-9538-98ca4298b821"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.201371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b" (OuterVolumeSpecName: "kube-api-access-m8h6b") pod "1ba77d4b-0178-4730-8869-389efdf58851" (UID: "1ba77d4b-0178-4730-8869-389efdf58851"). InnerVolumeSpecName "kube-api-access-m8h6b". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.201436 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6" (OuterVolumeSpecName: "kube-api-access-rkgp6") pod "7ebdb343-11c1-4e64-9538-98ca4298b821" (UID: "7ebdb343-11c1-4e64-9538-98ca4298b821"). InnerVolumeSpecName "kube-api-access-rkgp6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.257673 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "7ebdb343-11c1-4e64-9538-98ca4298b821" (UID: "7ebdb343-11c1-4e64-9538-98ca4298b821"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.266975 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "1ba77d4b-0178-4730-8869-389efdf58851" (UID: "1ba77d4b-0178-4730-8869-389efdf58851"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297480 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-m8h6b\" (UniqueName: \"kubernetes.io/projected/1ba77d4b-0178-4730-8869-389efdf58851-kube-api-access-m8h6b\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297549 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297561 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297575 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rkgp6\" (UniqueName: \"kubernetes.io/projected/7ebdb343-11c1-4e64-9538-98ca4298b821-kube-api-access-rkgp6\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297609 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/1ba77d4b-0178-4730-8869-389efdf58851-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.297618 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/7ebdb343-11c1-4e64-9538-98ca4298b821-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.484248 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-8pplm" event={"ID":"1ba77d4b-0178-4730-8869-389efdf58851","Type":"ContainerDied","Data":"1a440377416e2e3be97cb4385521f0b527fd44fc3d296005eb3a6215b7798a51"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.484324 4725 scope.go:117] "RemoveContainer" containerID="fe1472cf74a505268aea085b9463dd11873df45aba71fba5af135317ac4193c1" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.484648 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-8pplm" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.486526 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6nxjc" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.486509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6nxjc" event={"ID":"7865a54a-be9b-4a0a-8c84-b45c8bfe40e6","Type":"ContainerDied","Data":"c444dae3d5ca85882553d57b5c52f2afebdc1ac865ea8fa27ac7b506e3700c60"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.488511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" event={"ID":"502a4051-5a60-4e90-a3f2-7dc035950a9b","Type":"ContainerDied","Data":"0b78375c7ed8f9916a58dd59c26f3043217b694c6d335a958edaddd11c21782a"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.488572 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/marketplace-operator-79b997595-tgvmj" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.493346 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" event={"ID":"5666b0dd-5364-4bee-a091-26fa796770cf","Type":"ContainerStarted","Data":"f558ce2a7eb158e666290dd96abad2a7f4f18a12319b0a69da2c71c8c5fcd386"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.493379 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" event={"ID":"5666b0dd-5364-4bee-a091-26fa796770cf","Type":"ContainerStarted","Data":"c1cf4501acbe7dd847f87b6a314b27e5232cecbe5d01451638e4494216fc8638"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.493596 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.496149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6n4zh" event={"ID":"7ebdb343-11c1-4e64-9538-98ca4298b821","Type":"ContainerDied","Data":"b3c438c94578ed127de08ab71e5b40caf95c66fe2d7a2b37a5e91dfd80db62be"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.496244 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6n4zh" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.501836 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.502104 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-c2jtp" event={"ID":"10de7f77-2b14-4c56-b4db-ebb93422b89c","Type":"ContainerDied","Data":"fbfff8e8818beecfb8c02cfbcbeb21c81754f2aeda1e021b3b81559a276b8a66"} Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.502251 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-c2jtp" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.514232 4725 scope.go:117] "RemoveContainer" containerID="95b3efd0e36287cff3884a1d24955133183f96b36b4ed22b901a472384a7ccb9" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.530837 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-79b997595-htj9r" podStartSLOduration=1.5308178369999998 podStartE2EDuration="1.530817837s" podCreationTimestamp="2026-01-20 11:11:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:11:40.527636537 +0000 UTC m=+428.735958510" watchObservedRunningTime="2026-01-20 11:11:40.530817837 +0000 UTC m=+428.739139810" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.556415 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.557994 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6nxjc"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.575133 4725 scope.go:117] "RemoveContainer" containerID="38beb6d6731fbc36ccb21ece2faf5cceb4d8191e98451bfd04d8127368937300" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.577632 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.581529 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-79b997595-tgvmj"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.591096 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.595672 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-c2jtp"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.597527 4725 scope.go:117] "RemoveContainer" containerID="7b79e336b8a9444cc9ab59a4ed8131e0eaf84cfc7a492f4b7e369343d81e1806" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.617205 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.626056 4725 scope.go:117] "RemoveContainer" containerID="6fc29a792c0b2bdbde59b088ffd262a4ab5cd2ba7cd161055d4ccd07f8587ee9" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.926035 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-6n4zh"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.947380 4725 scope.go:117] "RemoveContainer" containerID="9f5ff65ac43718d6c6a2cb0ff08d34aa44b3c5b853c8111fc5672b5c544f3567" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.961899 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" path="/var/lib/kubelet/pods/10de7f77-2b14-4c56-b4db-ebb93422b89c/volumes" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.963504 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" path="/var/lib/kubelet/pods/502a4051-5a60-4e90-a3f2-7dc035950a9b/volumes" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.966469 4725 
kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" path="/var/lib/kubelet/pods/7865a54a-be9b-4a0a-8c84-b45c8bfe40e6/volumes" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.970207 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" path="/var/lib/kubelet/pods/7ebdb343-11c1-4e64-9538-98ca4298b821/volumes" Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.971753 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-8pplm"] Jan 20 11:11:40 crc kubenswrapper[4725]: I0120 11:11:40.980241 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-8pplm"] Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.007905 4725 scope.go:117] "RemoveContainer" containerID="4e8f7c705143e7b6c5cb6feb639a537dc020d95d17ca8baee39e25fc4da83488" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.071688 4725 scope.go:117] "RemoveContainer" containerID="a353b91011ed8a0053f12f280e559ea334c135ab3db3548126610c4f6e3cdf19" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.088134 4725 scope.go:117] "RemoveContainer" containerID="e298ffa53486948221219263d81f91dd0aaf57b63b66a788f8e75324e688da37" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.114548 4725 scope.go:117] "RemoveContainer" containerID="a8988d59128eab2f53f7dd920de01a7b98a3e4e952f90431883ff756e50dadbe" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.134878 4725 scope.go:117] "RemoveContainer" containerID="a8afec3445fd422a4b7356f38493af3a76b9139996d9b5d98e4135127e6f59ee" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.157700 4725 scope.go:117] "RemoveContainer" containerID="3aebd70372873b9fbd7b4e02c72fa5025a0936f55bfdb8b39fafb1a0022fe117" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.170858 4725 scope.go:117] "RemoveContainer" containerID="79b3dc2509427f8e48ea65515f6bd240f048253490613646e6daeff65ff41302" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.484582 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hht7w"] Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485165 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485269 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485392 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485458 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485531 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485599 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485672 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485735 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485799 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485870 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.485931 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.485988 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486044 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486128 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486204 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486267 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486324 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486377 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486440 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486496 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486556 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486610 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486661 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486730 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="extract-content" Jan 20 11:11:41 crc kubenswrapper[4725]: E0120 11:11:41.486798 4725 cpu_manager.go:410] 
"RemoveStaleState: removing container" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.486871 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="extract-utilities" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487016 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="10de7f77-2b14-4c56-b4db-ebb93422b89c" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487112 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7865a54a-be9b-4a0a-8c84-b45c8bfe40e6" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487204 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="502a4051-5a60-4e90-a3f2-7dc035950a9b" containerName="marketplace-operator" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487294 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="1ba77d4b-0178-4730-8869-389efdf58851" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.487375 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7ebdb343-11c1-4e64-9538-98ca4298b821" containerName="registry-server" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.488910 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.492493 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-operators-dockercfg-ct8rh" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.495484 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hht7w"] Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.632321 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-catalog-content\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.632366 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7hmzs\" (UniqueName: \"kubernetes.io/projected/2c4020a9-4953-4dee-8bc0-2329493c8b8a-kube-api-access-7hmzs\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.632404 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-utilities\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.733301 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-catalog-content\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 
11:11:41.733360 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7hmzs\" (UniqueName: \"kubernetes.io/projected/2c4020a9-4953-4dee-8bc0-2329493c8b8a-kube-api-access-7hmzs\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.733404 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-utilities\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.733828 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-utilities\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.734405 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/2c4020a9-4953-4dee-8bc0-2329493c8b8a-catalog-content\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.759006 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7hmzs\" (UniqueName: \"kubernetes.io/projected/2c4020a9-4953-4dee-8bc0-2329493c8b8a-kube-api-access-7hmzs\") pod \"redhat-operators-hht7w\" (UID: \"2c4020a9-4953-4dee-8bc0-2329493c8b8a\") " pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:41 crc kubenswrapper[4725]: I0120 11:11:41.809656 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.271806 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hht7w"] Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.485443 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerName="registry" containerID="cri-o://8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" gracePeriod=30 Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.523058 4725 generic.go:334] "Generic (PLEG): container finished" podID="2c4020a9-4953-4dee-8bc0-2329493c8b8a" containerID="e8cb2acadf289125fec98b352d3572f1856b247139e042f8f95bfeab691ed4fa" exitCode=0 Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.523210 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerDied","Data":"e8cb2acadf289125fec98b352d3572f1856b247139e042f8f95bfeab691ed4fa"} Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.523453 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerStarted","Data":"b3e93984857ebda76d0640c08dbdcc80927d9e3c76e1309aec29f9914b93ba34"} Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.898814 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-6dzml"] Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.899933 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.902794 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"certified-operators-dockercfg-4rs5g" Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.916898 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.920884 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6dzml"] Jan 20 11:11:42 crc kubenswrapper[4725]: I0120 11:11:42.941151 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1ba77d4b-0178-4730-8869-389efdf58851" path="/var/lib/kubelet/pods/1ba77d4b-0178-4730-8869-389efdf58851/volumes" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052483 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052709 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052769 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052812 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052860 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052925 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.052966 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") pod \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\" (UID: \"cec62c65-a846-4cc0-bb51-01d2d70c4c85\") " Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.053134 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-utilities\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.053186 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-catalog-content\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.053229 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-blwm2\" (UniqueName: \"kubernetes.io/projected/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-kube-api-access-blwm2\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.054552 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.054846 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.060013 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.060048 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.060168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb" (OuterVolumeSpecName: "kube-api-access-5nmbb") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "kube-api-access-5nmbb". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.060355 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.063612 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8" (OuterVolumeSpecName: "registry-storage") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "pvc-657094db-63f1-4ba8-9a24-edca0e80b7a8". PluginName "kubernetes.io/csi", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.073218 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "cec62c65-a846-4cc0-bb51-01d2d70c4c85" (UID: "cec62c65-a846-4cc0-bb51-01d2d70c4c85"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.154825 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-utilities\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.154896 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-catalog-content\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.154934 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-blwm2\" (UniqueName: \"kubernetes.io/projected/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-kube-api-access-blwm2\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155029 4725 reconciler_common.go:293] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/cec62c65-a846-4cc0-bb51-01d2d70c4c85-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155045 4725 reconciler_common.go:293] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155057 4725 reconciler_common.go:293] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/cec62c65-a846-4cc0-bb51-01d2d70c4c85-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155070 4725 reconciler_common.go:293] "Volume 
detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/cec62c65-a846-4cc0-bb51-01d2d70c4c85-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155194 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-5nmbb\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-kube-api-access-5nmbb\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155205 4725 reconciler_common.go:293] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155213 4725 reconciler_common.go:293] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/cec62c65-a846-4cc0-bb51-01d2d70c4c85-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155403 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-catalog-content\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.155523 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-utilities\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.171956 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-blwm2\" (UniqueName: \"kubernetes.io/projected/e1530fd1-1850-4d4f-b6a7-cc1784d9c399-kube-api-access-blwm2\") pod \"certified-operators-6dzml\" (UID: \"e1530fd1-1850-4d4f-b6a7-cc1784d9c399\") " pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.230152 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.530968 4725 generic.go:334] "Generic (PLEG): container finished" podID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerID="8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" exitCode=0 Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.531144 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.531163 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" event={"ID":"cec62c65-a846-4cc0-bb51-01d2d70c4c85","Type":"ContainerDied","Data":"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b"} Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.532031 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-697d97f7c8-w5jhq" event={"ID":"cec62c65-a846-4cc0-bb51-01d2d70c4c85","Type":"ContainerDied","Data":"ed7560860908ee6c4f83f3490cbdd1843d5adf7ac8051897ed017552b83ca2ee"} Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.532059 4725 scope.go:117] "RemoveContainer" containerID="8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.567366 4725 scope.go:117] "RemoveContainer" containerID="8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" Jan 20 11:11:43 crc kubenswrapper[4725]: E0120 11:11:43.569401 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b\": container with ID starting with 8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b not found: ID does not exist" containerID="8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.569403 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.569453 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b"} err="failed to get container status \"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b\": rpc error: code = NotFound desc = could not find container \"8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b\": container with ID starting with 8a8b25d03f991998693369336d59387900063f6ca5d2b3f0359c5e5416c3ec8b not found: ID does not exist" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.575810 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-697d97f7c8-w5jhq"] Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.629153 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-6dzml"] Jan 20 11:11:43 crc kubenswrapper[4725]: W0120 11:11:43.632759 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pode1530fd1_1850_4d4f_b6a7_cc1784d9c399.slice/crio-5b6e07754a1cd4fb4b59a5611f8e1c88ad608e92ed527529c9aa3aaa60c418bf WatchSource:0}: Error finding container 5b6e07754a1cd4fb4b59a5611f8e1c88ad608e92ed527529c9aa3aaa60c418bf: Status 404 returned error can't find the container with id 5b6e07754a1cd4fb4b59a5611f8e1c88ad608e92ed527529c9aa3aaa60c418bf Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.882779 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-hm4k5"] Jan 20 11:11:43 crc kubenswrapper[4725]: E0120 11:11:43.883017 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerName="registry" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.883035 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerName="registry" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.883176 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" containerName="registry" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.883921 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.887920 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"community-operators-dockercfg-dmngl" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.892665 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm4k5"] Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.992188 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-utilities\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.992260 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-catalog-content\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:43 crc kubenswrapper[4725]: I0120 11:11:43.992335 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlkb5\" (UniqueName: \"kubernetes.io/projected/da38c2a2-fb87-4115-ac25-0256bee850ae-kube-api-access-qlkb5\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.093689 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-catalog-content\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.093787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlkb5\" (UniqueName: \"kubernetes.io/projected/da38c2a2-fb87-4115-ac25-0256bee850ae-kube-api-access-qlkb5\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.093860 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-utilities\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.094393 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-catalog-content\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.094414 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/da38c2a2-fb87-4115-ac25-0256bee850ae-utilities\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.113546 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlkb5\" (UniqueName: \"kubernetes.io/projected/da38c2a2-fb87-4115-ac25-0256bee850ae-kube-api-access-qlkb5\") pod \"community-operators-hm4k5\" (UID: \"da38c2a2-fb87-4115-ac25-0256bee850ae\") " pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.203026 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.542130 4725 generic.go:334] "Generic (PLEG): container finished" podID="e1530fd1-1850-4d4f-b6a7-cc1784d9c399" containerID="f764718ca9a5b6ac659a1d7302281a4f92ac07e802e076380a4f9c3dc2f6a39a" exitCode=0 Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.542512 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dzml" event={"ID":"e1530fd1-1850-4d4f-b6a7-cc1784d9c399","Type":"ContainerDied","Data":"f764718ca9a5b6ac659a1d7302281a4f92ac07e802e076380a4f9c3dc2f6a39a"} Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.542573 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dzml" event={"ID":"e1530fd1-1850-4d4f-b6a7-cc1784d9c399","Type":"ContainerStarted","Data":"5b6e07754a1cd4fb4b59a5611f8e1c88ad608e92ed527529c9aa3aaa60c418bf"} Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.552584 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerStarted","Data":"f941a6e2c8f5d7761cdfac57414cafaea4f486589d48240bcfa7b604979a0a9d"} Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.796261 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-hm4k5"] Jan 20 11:11:44 crc kubenswrapper[4725]: W0120 11:11:44.841698 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podda38c2a2_fb87_4115_ac25_0256bee850ae.slice/crio-b3bf2b8aa63fbfabc07ce204356eb517e91c3ddd522fe7e914e908a97b7c1ec6 WatchSource:0}: Error finding container b3bf2b8aa63fbfabc07ce204356eb517e91c3ddd522fe7e914e908a97b7c1ec6: Status 404 returned error can't find the container with id b3bf2b8aa63fbfabc07ce204356eb517e91c3ddd522fe7e914e908a97b7c1ec6 Jan 20 11:11:44 crc kubenswrapper[4725]: I0120 11:11:44.941702 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cec62c65-a846-4cc0-bb51-01d2d70c4c85" path="/var/lib/kubelet/pods/cec62c65-a846-4cc0-bb51-01d2d70c4c85/volumes" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.281995 4725 kubelet.go:2421] "SyncLoop ADD" 
source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.282929 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.284991 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"redhat-marketplace-dockercfg-x2ctb" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.293012 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.437456 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.437553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.437630 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.539525 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.539598 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.539667 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.541012 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.541127 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.563312 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") pod \"redhat-marketplace-hz6gm\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.577917 4725 generic.go:334] "Generic (PLEG): container finished" podID="2c4020a9-4953-4dee-8bc0-2329493c8b8a" containerID="f941a6e2c8f5d7761cdfac57414cafaea4f486589d48240bcfa7b604979a0a9d" exitCode=0 Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.578016 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerDied","Data":"f941a6e2c8f5d7761cdfac57414cafaea4f486589d48240bcfa7b604979a0a9d"} Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.585230 4725 generic.go:334] "Generic (PLEG): container finished" podID="da38c2a2-fb87-4115-ac25-0256bee850ae" containerID="f62d075a93b6fe9e16a57eaedd21e95b4746f4b271035e9245ac949b7f419b8c" exitCode=0 Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.585326 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerDied","Data":"f62d075a93b6fe9e16a57eaedd21e95b4746f4b271035e9245ac949b7f419b8c"} Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.585366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerStarted","Data":"b3bf2b8aa63fbfabc07ce204356eb517e91c3ddd522fe7e914e908a97b7c1ec6"} Jan 20 11:11:45 crc kubenswrapper[4725]: I0120 11:11:45.610261 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.051633 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.596465 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hht7w" event={"ID":"2c4020a9-4953-4dee-8bc0-2329493c8b8a","Type":"ContainerStarted","Data":"3dd1b4dc4fa2bdc681f5c471e9f8f3bd74508115900d1dbbf1e0bc9f0487534a"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.598024 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerID="510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0" exitCode=0 Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.598182 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerDied","Data":"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.598359 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerStarted","Data":"620e1c951a5a2604e4ce57c3358b1935e7a5f6d46eec1265f136ddf73f1fb079"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.600110 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerStarted","Data":"b108a48d976e53e2951586fecb05498b34546b6ee68450d00491a99c445ae608"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.602171 4725 generic.go:334] "Generic (PLEG): container finished" podID="e1530fd1-1850-4d4f-b6a7-cc1784d9c399" containerID="683185dc2054d27f858b64fc845b60cf512ece9e9ae65f544542bd1a27883a18" exitCode=0 Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.602203 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dzml" event={"ID":"e1530fd1-1850-4d4f-b6a7-cc1784d9c399","Type":"ContainerDied","Data":"683185dc2054d27f858b64fc845b60cf512ece9e9ae65f544542bd1a27883a18"} Jan 20 11:11:46 crc kubenswrapper[4725]: I0120 11:11:46.625398 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hht7w" podStartSLOduration=2.128617826 podStartE2EDuration="5.625381835s" podCreationTimestamp="2026-01-20 11:11:41 +0000 UTC" firstStartedPulling="2026-01-20 11:11:42.525437921 +0000 UTC m=+430.733759884" lastFinishedPulling="2026-01-20 11:11:46.02220191 +0000 UTC m=+434.230523893" observedRunningTime="2026-01-20 11:11:46.621882057 +0000 UTC m=+434.830204050" watchObservedRunningTime="2026-01-20 11:11:46.625381835 +0000 UTC m=+434.833703808" Jan 20 11:11:47 crc kubenswrapper[4725]: I0120 11:11:47.621272 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-6dzml" event={"ID":"e1530fd1-1850-4d4f-b6a7-cc1784d9c399","Type":"ContainerStarted","Data":"caf58d02456c9e340d612bb66dd695db47c6c3ef907e95bbdc47015fdaaac498"}
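
The "Observed pod startup duration" record above encodes a checkable relation: podStartSLOduration is podStartE2EDuration minus the time spent pulling images (lastFinishedPulling minus firstStartedPulling), easiest to verify with the monotonic m=+ offsets. Verifying it for redhat-operators-hht7w from the logged values:

    # Monotonic offsets (m=+...) from the hht7w startup-duration record above.
    first_started_pulling = 430.733759884
    last_finished_pulling = 434.230523893
    e2e = 5.625381835                          # podStartE2EDuration
    slo = e2e - (last_finished_pulling - first_started_pulling)
    print(f'{slo:.9f}')                        # 2.128617826, matching podStartSLOduration

Jan 20 11:11:47 crc kubenswrapper[4725]: I0120 11:11:47.633689 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5"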
event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerDied","Data":"b108a48d976e53e2951586fecb05498b34546b6ee68450d00491a99c445ae608"} Jan 20 11:11:47 crc kubenswrapper[4725]: I0120 11:11:47.635277 4725 generic.go:334] "Generic (PLEG): container finished" podID="da38c2a2-fb87-4115-ac25-0256bee850ae" containerID="b108a48d976e53e2951586fecb05498b34546b6ee68450d00491a99c445ae608" exitCode=0 Jan 20 11:11:47 crc kubenswrapper[4725]: I0120 11:11:47.652880 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-6dzml" podStartSLOduration=3.117943335 podStartE2EDuration="5.652860997s" podCreationTimestamp="2026-01-20 11:11:42 +0000 UTC" firstStartedPulling="2026-01-20 11:11:44.545000986 +0000 UTC m=+432.753322969" lastFinishedPulling="2026-01-20 11:11:47.079918658 +0000 UTC m=+435.288240631" observedRunningTime="2026-01-20 11:11:47.648802311 +0000 UTC m=+435.857124314" watchObservedRunningTime="2026-01-20 11:11:47.652860997 +0000 UTC m=+435.861182970" Jan 20 11:11:48 crc kubenswrapper[4725]: I0120 11:11:48.643063 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerID="fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98" exitCode=0 Jan 20 11:11:48 crc kubenswrapper[4725]: I0120 11:11:48.643138 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerDied","Data":"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98"} Jan 20 11:11:48 crc kubenswrapper[4725]: I0120 11:11:48.646488 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-hm4k5" event={"ID":"da38c2a2-fb87-4115-ac25-0256bee850ae","Type":"ContainerStarted","Data":"5a039b5561beca49ad265852ed78bc72b62a83dbc64c3518fa94ead2a122c7d7"} Jan 20 11:11:48 crc kubenswrapper[4725]: I0120 11:11:48.696810 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-hm4k5" podStartSLOduration=3.231847082 podStartE2EDuration="5.696789934s" podCreationTimestamp="2026-01-20 11:11:43 +0000 UTC" firstStartedPulling="2026-01-20 11:11:45.586587169 +0000 UTC m=+433.794909142" lastFinishedPulling="2026-01-20 11:11:48.051530011 +0000 UTC m=+436.259851994" observedRunningTime="2026-01-20 11:11:48.693342017 +0000 UTC m=+436.901663990" watchObservedRunningTime="2026-01-20 11:11:48.696789934 +0000 UTC m=+436.905111907" Jan 20 11:11:50 crc kubenswrapper[4725]: I0120 11:11:50.661279 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerStarted","Data":"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9"} Jan 20 11:11:50 crc kubenswrapper[4725]: I0120 11:11:50.687154 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-hz6gm" podStartSLOduration=2.750471904 podStartE2EDuration="5.687138565s" podCreationTimestamp="2026-01-20 11:11:45 +0000 UTC" firstStartedPulling="2026-01-20 11:11:46.599261398 +0000 UTC m=+434.807583371" lastFinishedPulling="2026-01-20 11:11:49.535928059 +0000 UTC m=+437.744250032" observedRunningTime="2026-01-20 11:11:50.682646244 +0000 UTC m=+438.890968207" watchObservedRunningTime="2026-01-20 11:11:50.687138565 +0000 UTC m=+438.895460528" Jan 20 11:11:51 crc 
kubenswrapper[4725]: I0120 11:11:51.812190 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:51 crc kubenswrapper[4725]: I0120 11:11:51.812266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:11:52 crc kubenswrapper[4725]: I0120 11:11:52.862168 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hht7w" podUID="2c4020a9-4953-4dee-8bc0-2329493c8b8a" containerName="registry-server" probeResult="failure" output=< Jan 20 11:11:52 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:11:52 crc kubenswrapper[4725]: > Jan 20 11:11:53 crc kubenswrapper[4725]: I0120 11:11:53.231426 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:53 crc kubenswrapper[4725]: I0120 11:11:53.231484 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:53 crc kubenswrapper[4725]: I0120 11:11:53.276169 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:53 crc kubenswrapper[4725]: I0120 11:11:53.738312 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-6dzml" Jan 20 11:11:54 crc kubenswrapper[4725]: I0120 11:11:54.204177 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:54 crc kubenswrapper[4725]: I0120 11:11:54.205943 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:54 crc kubenswrapper[4725]: I0120 11:11:54.249100 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:54 crc kubenswrapper[4725]: I0120 11:11:54.725728 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-hm4k5" Jan 20 11:11:55 crc kubenswrapper[4725]: I0120 11:11:55.610653 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:55 crc kubenswrapper[4725]: I0120 11:11:55.610794 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:55 crc kubenswrapper[4725]: I0120 11:11:55.659994 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:11:55 crc kubenswrapper[4725]: I0120 11:11:55.734576 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:12:01 crc kubenswrapper[4725]: I0120 11:12:01.849849 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hht7w" Jan 20 11:12:01 crc kubenswrapper[4725]: I0120 11:12:01.894159 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hht7w"
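
Each catalog pod above follows the same probe pattern: the startup probe (evidently a gRPC health check against :50051, per the failure output) reports unhealthy while the catalog loads, readiness is logged with an empty status (not yet known) in the meantime, and only after startup flips to started does readiness go ready. A small sketch that extracts these transitions; the regex and the one-record-per-line assumption are inferences about this log's shape:

    import re

    probe = re.compile(r'I\d{4} (\d{2}:\d{2}:\d{2}\.\d{6}) .*?"SyncLoop \(probe\)" '
                       r'probe="(\w+)" status="(\w*)" pod="([^"]+)"')

    def probe_transitions(lines):
        """Yield (time, pod, probe, status); an empty status means not-yet-known."""
        for line in lines:
            if m := probe.search(line):
                t, kind, status, pod = m.groups()
                yield t, pod.split('/')[-1], kind, status or 'unknown'

For redhat-operators-hht7w this shows startup unhealthy at 11:11:51.8 and started at 11:12:01.8, with readiness ready about 44 ms later; the other three catalogs converge within a few seconds of their first unhealthy report.

Jan 20 11:13:56 crc kubenswrapper[4725]: I0120 11:13:56.728013 4725 patch_prober.go:28] interesting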
pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:13:56 crc kubenswrapper[4725]: I0120 11:13:56.728891 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:14:26 crc kubenswrapper[4725]: I0120 11:14:26.727950 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:14:26 crc kubenswrapper[4725]: I0120 11:14:26.729115 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.727941 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.728623 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.728701 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.729725 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.729906 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b" gracePeriod=600 Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.996343 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b" exitCode=0
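
The cadence above tells the restart story even without the pod spec in hand: the liveness failures at 11:13:56, 11:14:26, and 11:14:56 arrive 30 s apart, consistent with periodSeconds=30 and failureThreshold=3 (an inference from the timestamps, since the spec itself is not in this log). On the third failure the kubelet marks the container for restart and kills it with a 600 s grace period, presumably the pod's terminationGracePeriodSeconds. Checking the spacing from the logged timestamps:

    from datetime import datetime

    # Liveness "Probe failed" timestamps from the records above.
    failures = ['11:13:56.728891', '11:14:26.729115', '11:14:56.728623']
    ts = [datetime.strptime(s, '%H:%M:%S.%f') for s in failures]
    gaps = [round((b - a).total_seconds(), 3) for a, b in zip(ts, ts[1:])]
    print(gaps)   # [30.0, 30.0] -> a 30 s probe period; the third consecutive failure triggers the kill

Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.996860 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod"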
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b"} Jan 20 11:14:56 crc kubenswrapper[4725]: I0120 11:14:56.996905 4725 scope.go:117] "RemoveContainer" containerID="c6e4775cbb437f357a123202b57e1186c3ce260099a44987956a10f515d8ed5f" Jan 20 11:14:58 crc kubenswrapper[4725]: I0120 11:14:58.008355 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2"} Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.177276 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"] Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.178596 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.182022 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.191322 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.195737 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"] Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.397576 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.397946 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.398001 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.499619 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.499693 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.499747 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.501261 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.506687 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.518101 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") pod \"collect-profiles-29481795-mbt22\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.598103 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:00 crc kubenswrapper[4725]: I0120 11:15:00.819635 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"] Jan 20 11:15:00 crc kubenswrapper[4725]: W0120 11:15:00.829137 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod7a41df2e_87f8_4dc4_a80c_36bd1bac44aa.slice/crio-be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331 WatchSource:0}: Error finding container be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331: Status 404 returned error can't find the container with id be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331 Jan 20 11:15:01 crc kubenswrapper[4725]: I0120 11:15:01.027669 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" event={"ID":"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa","Type":"ContainerStarted","Data":"df134c08a91a6b779bd70a8a4d9a198b2216cb01743c5cae1cc33fd6809cfc61"} Jan 20 11:15:01 crc kubenswrapper[4725]: I0120 11:15:01.027727 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" event={"ID":"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa","Type":"ContainerStarted","Data":"be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331"} Jan 20 11:15:01 crc kubenswrapper[4725]: I0120 11:15:01.053841 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" podStartSLOduration=1.053816659 podStartE2EDuration="1.053816659s" podCreationTimestamp="2026-01-20 11:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:15:01.050201824 +0000 UTC m=+629.258523807" watchObservedRunningTime="2026-01-20 11:15:01.053816659 +0000 UTC m=+629.262138632" Jan 20 11:15:02 crc kubenswrapper[4725]: I0120 11:15:02.036291 4725 generic.go:334] "Generic (PLEG): container finished" podID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" containerID="df134c08a91a6b779bd70a8a4d9a198b2216cb01743c5cae1cc33fd6809cfc61" exitCode=0 Jan 20 11:15:02 crc kubenswrapper[4725]: I0120 11:15:02.036357 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" event={"ID":"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa","Type":"ContainerDied","Data":"df134c08a91a6b779bd70a8a4d9a198b2216cb01743c5cae1cc33fd6809cfc61"} Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.294709 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.445431 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") pod \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.445634 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") pod \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.445685 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") pod \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\" (UID: \"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa\") " Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.447652 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume" (OuterVolumeSpecName: "config-volume") pod "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" (UID: "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.453704 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w" (OuterVolumeSpecName: "kube-api-access-c5h9w") pod "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" (UID: "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa"). InnerVolumeSpecName "kube-api-access-c5h9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.456019 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" (UID: "7a41df2e-87f8-4dc4-a80c-36bd1bac44aa"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.547972 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.548034 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-c5h9w\" (UniqueName: \"kubernetes.io/projected/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-kube-api-access-c5h9w\") on node \"crc\" DevicePath \"\"" Jan 20 11:15:03 crc kubenswrapper[4725]: I0120 11:15:03.548045 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:15:04 crc kubenswrapper[4725]: I0120 11:15:04.055997 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" event={"ID":"7a41df2e-87f8-4dc4-a80c-36bd1bac44aa","Type":"ContainerDied","Data":"be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331"} Jan 20 11:15:04 crc kubenswrapper[4725]: I0120 11:15:04.056105 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22" Jan 20 11:15:04 crc kubenswrapper[4725]: I0120 11:15:04.056072 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="be88c9f544298f20c2fcb9c3a56f32347d37bc49982511f8c9805369d3941331" Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.638450 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nz9p5"] Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639562 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-controller" containerID="cri-o://f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" gracePeriod=30 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639659 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="nbdb" containerID="cri-o://647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" gracePeriod=30 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639712 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-node" containerID="cri-o://9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" gracePeriod=30 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639752 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-acl-logging" containerID="cri-o://eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" gracePeriod=30 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.639769 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" 
containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" gracePeriod=30 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.640053 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="northd" containerID="cri-o://d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" gracePeriod=30 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.640173 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="sbdb" containerID="cri-o://62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" gracePeriod=30 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.681654 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" containerID="cri-o://3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" gracePeriod=30 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.833993 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/2.log" Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835016 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/1.log" Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835075 4725 generic.go:334] "Generic (PLEG): container finished" podID="627f7c97-4173-413f-a90e-e2c5e058c53b" containerID="02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5" exitCode=2 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835222 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerDied","Data":"02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5"} Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835275 4725 scope.go:117] "RemoveContainer" containerID="31582cdcadbdaf1ab01a8b97fcd1abb4352c22086d2562704ab13dd4f470cea6" Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.835847 4725 scope.go:117] "RemoveContainer" containerID="02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5" Jan 20 11:16:40 crc kubenswrapper[4725]: E0120 11:16:40.836223 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-multus\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-multus pod=multus-vchwb_openshift-multus(627f7c97-4173-413f-a90e-e2c5e058c53b)\"" pod="openshift-multus/multus-vchwb" podUID="627f7c97-4173-413f-a90e-e2c5e058c53b" Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.841736 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovnkube-controller/3.log" Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.844817 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-acl-logging/0.log" Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.845409 4725 log.go:25] "Finished parsing log 
file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-controller/0.log" Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846001 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2" exitCode=0 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846034 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604" exitCode=0 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846045 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385" exitCode=0 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846053 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5" exitCode=143 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846061 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348" exitCode=143 Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846100 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846132 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846143 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846155 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.846165 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} Jan 20 11:16:40 crc kubenswrapper[4725]: I0120 11:16:40.971703 4725 scope.go:117] "RemoveContainer" containerID="62eb60e15f0bdb732329d3d5b45d66a9b6d257c960813ac5b5c41c5bd096b241" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.444240 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-acl-logging/0.log" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.445445 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-controller/0.log" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.446321 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517593 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-qbj7d"] Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517860 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-node" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517880 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-node" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517894 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" containerName="collect-profiles" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517901 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" containerName="collect-profiles" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517907 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kubecfg-setup" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517914 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kubecfg-setup" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517924 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517931 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517940 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="northd" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517946 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="northd" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517952 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="nbdb" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517957 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="nbdb" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517966 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="sbdb" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517971 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="sbdb" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517980 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.517986 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.517995 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518001 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518009 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-acl-logging" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518015 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-acl-logging" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518023 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518029 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518039 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518045 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518053 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518058 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518231 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518269 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518276 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-acl-logging" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518285 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="northd" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518293 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-node" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518304 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovn-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518313 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc 
kubenswrapper[4725]: I0120 11:16:41.518320 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" containerName="collect-profiles" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518327 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="sbdb" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518333 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="nbdb" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518341 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="kube-rbac-proxy-ovn-metrics" Jan 20 11:16:41 crc kubenswrapper[4725]: E0120 11:16:41.518451 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518458 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518580 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.518773 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerName="ovnkube-controller" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.520374 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592683 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592767 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592802 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592818 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592850 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). 
InnerVolumeSpecName "host-run-netns". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592893 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "etc-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.592912 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593003 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593046 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593098 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593141 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log" (OuterVolumeSpecName: "node-log") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593137 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash" (OuterVolumeSpecName: "host-slash") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-slash". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593117 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593220 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket" (OuterVolumeSpecName: "log-socket") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593261 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593281 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593311 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593326 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593387 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593395 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-cni-netd". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593447 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593516 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-kubelet". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593587 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593638 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593689 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593625 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593734 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593836 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "ovnkube-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.593875 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594211 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594253 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594277 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594389 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") pod \"9143f3c2-a068-494d-b7e1-4200c04394a3\" (UID: \"9143f3c2-a068-494d-b7e1-4200c04394a3\") " Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594425 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594454 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-run-ovn-kubernetes". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594554 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594747 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-etc-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594808 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66c21855-eb77-483d-8eeb-4e8803477516-ovn-node-metrics-cert\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594911 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-config\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594963 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.594985 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-log-socket\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595007 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595054 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r8dk\" (UniqueName: \"kubernetes.io/projected/66c21855-eb77-483d-8eeb-4e8803477516-kube-api-access-4r8dk\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595122 4725 
reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-script-lib\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595145 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-systemd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-ovn\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595194 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-node-log\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595273 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-systemd-units\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595299 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-slash\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595328 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595350 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-bin\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595377 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-kubelet\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc 
kubenswrapper[4725]: I0120 11:16:41.595403 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-var-lib-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595418 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-netd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595434 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-netns\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595452 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-env-overrides\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595503 4725 reconciler_common.go:293] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595514 4725 reconciler_common.go:293] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595525 4725 reconciler_common.go:293] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595537 4725 reconciler_common.go:293] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595548 4725 reconciler_common.go:293] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595558 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595568 4725 reconciler_common.go:293] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-systemd-units\") on node \"crc\" DevicePath \"\"" 
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595584 4725 reconciler_common.go:293] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-slash\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595595 4725 reconciler_common.go:293] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-env-overrides\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595604 4725 reconciler_common.go:293] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-node-log\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595615 4725 reconciler_common.go:293] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-log-socket\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595645 4725 reconciler_common.go:293] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-netd\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595654 4725 reconciler_common.go:293] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-cni-bin\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595663 4725 reconciler_common.go:293] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/9143f3c2-a068-494d-b7e1-4200c04394a3-ovnkube-config\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595671 4725 reconciler_common.go:293] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-ovn\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595681 4725 reconciler_common.go:293] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-host-kubelet\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.595690 4725 reconciler_common.go:293] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-var-lib-openvswitch\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.603699 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k" (OuterVolumeSpecName: "kube-api-access-fsm7k") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "kube-api-access-fsm7k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.607245 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.613097 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "9143f3c2-a068-494d-b7e1-4200c04394a3" (UID: "9143f3c2-a068-494d-b7e1-4200c04394a3"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696818 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696874 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696900 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-log-socket\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696927 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4r8dk\" (UniqueName: \"kubernetes.io/projected/66c21855-eb77-483d-8eeb-4e8803477516-kube-api-access-4r8dk\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696952 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-systemd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.696981 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-script-lib\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697001 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-ovn\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697020 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-node-log\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-systemd-units\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697031 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697107 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-systemd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697152 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-run-ovn\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697178 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-log-socket\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697069 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-slash\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697158 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-slash\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697209 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-node-log\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697042 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697242 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-systemd-units\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697255 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697280 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-bin\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697312 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-kubelet\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697329 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-netd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697365 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-var-lib-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697371 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-kubelet\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697384 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-netns\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-bin\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697390 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-ovn-kubernetes\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697409 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-env-overrides\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697418 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-cni-netd\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697446 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-var-lib-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697449 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-host-run-netns\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697549 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-etc-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66c21855-eb77-483d-8eeb-4e8803477516-ovn-node-metrics-cert\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697624 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-config\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697712 4725 reconciler_common.go:293] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/9143f3c2-a068-494d-b7e1-4200c04394a3-run-systemd\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697726 4725 reconciler_common.go:293] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/9143f3c2-a068-494d-b7e1-4200c04394a3-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697738 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fsm7k\" (UniqueName: \"kubernetes.io/projected/9143f3c2-a068-494d-b7e1-4200c04394a3-kube-api-access-fsm7k\") on node \"crc\" DevicePath \"\""
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697972 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-script-lib\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.697977 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/66c21855-eb77-483d-8eeb-4e8803477516-etc-openvswitch\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.698198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-env-overrides\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.699293 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/66c21855-eb77-483d-8eeb-4e8803477516-ovnkube-config\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.702626 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/66c21855-eb77-483d-8eeb-4e8803477516-ovn-node-metrics-cert\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.716769 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4r8dk\" (UniqueName: \"kubernetes.io/projected/66c21855-eb77-483d-8eeb-4e8803477516-kube-api-access-4r8dk\") pod \"ovnkube-node-qbj7d\" (UID: \"66c21855-eb77-483d-8eeb-4e8803477516\") " pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.841293 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.876145 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/2.log"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.882262 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-acl-logging/0.log"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.882827 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-nz9p5_9143f3c2-a068-494d-b7e1-4200c04394a3/ovn-controller/0.log"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883320 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f" exitCode=0
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883379 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7" exitCode=0
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883387 4725 generic.go:334] "Generic (PLEG): container finished" podID="9143f3c2-a068-494d-b7e1-4200c04394a3" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0" exitCode=0
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883424 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"}
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883498 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"}
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883512 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"}
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883522 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5" event={"ID":"9143f3c2-a068-494d-b7e1-4200c04394a3","Type":"ContainerDied","Data":"841bffee0b69f32791094c7c4308ffc3cc66e9ac1d4699a14fb043fa42825bd2"}
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883553 4725 scope.go:117] "RemoveContainer" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.883725 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-nz9p5"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.924817 4725 scope.go:117] "RemoveContainer" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.952448 4725 scope.go:117] "RemoveContainer" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.958977 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nz9p5"]
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.962317 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-nz9p5"]
Jan 20 11:16:41 crc kubenswrapper[4725]: I0120 11:16:41.976395 4725 scope.go:117] "RemoveContainer" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.004379 4725 scope.go:117] "RemoveContainer" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.019110 4725 scope.go:117] "RemoveContainer" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.033390 4725 scope.go:117] "RemoveContainer" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.049428 4725 scope.go:117] "RemoveContainer" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.064219 4725 scope.go:117] "RemoveContainer" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.079425 4725 scope.go:117] "RemoveContainer" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.079950 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": container with ID starting with 3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2 not found: ID does not exist" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.080026 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} err="failed to get container status \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": rpc error: code = NotFound desc = could not find container \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": container with ID starting with 3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.080160 4725 scope.go:117] "RemoveContainer" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.080662 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": container with ID starting with 62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f not found: ID does not exist" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.080696 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} err="failed to get container status \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": rpc error: code = NotFound desc = could not find container \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": container with ID starting with 62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.080712 4725 scope.go:117] "RemoveContainer" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.081096 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": container with ID starting with 647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7 not found: ID does not exist" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081138 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} err="failed to get container status \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": rpc error: code = NotFound desc = could not find container \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": container with ID starting with 647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081164 4725 scope.go:117] "RemoveContainer" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.081471 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": container with ID starting with d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0 not found: ID does not exist" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081498 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} err="failed to get container status \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": rpc error: code = NotFound desc = could not find container \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": container with ID starting with d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081516 4725 scope.go:117] "RemoveContainer" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.081921 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": container with ID starting with 4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604 not found: ID does not exist" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081945 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} err="failed to get container status \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": rpc error: code = NotFound desc = could not find container \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": container with ID starting with 4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.081961 4725 scope.go:117] "RemoveContainer" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.082417 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": container with ID starting with 9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385 not found: ID does not exist" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.082445 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} err="failed to get container status \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": rpc error: code = NotFound desc = could not find container \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": container with ID starting with 9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.082457 4725 scope.go:117] "RemoveContainer" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.082685 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": container with ID starting with eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5 not found: ID does not exist" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.082703 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} err="failed to get container status \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": rpc error: code = NotFound desc = could not find container \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": container with ID starting with eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.082714 4725 scope.go:117] "RemoveContainer" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.083067 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": container with ID starting with f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348 not found: ID does not exist" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083128 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} err="failed to get container status \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": rpc error: code = NotFound desc = could not find container \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": container with ID starting with f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083153 4725 scope.go:117] "RemoveContainer" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"
Jan 20 11:16:42 crc kubenswrapper[4725]: E0120 11:16:42.083601 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": container with ID starting with 4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa not found: ID does not exist" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083631 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"} err="failed to get container status \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": rpc error: code = NotFound desc = could not find container \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": container with ID starting with 4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083649 4725 scope.go:117] "RemoveContainer" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083933 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} err="failed to get container status \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": rpc error: code = NotFound desc = could not find container \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": container with ID starting with 3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.083979 4725 scope.go:117] "RemoveContainer" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084453 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} err="failed to get container status \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": rpc error: code = NotFound desc = could not find container \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": container with ID starting with 62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084488 4725 scope.go:117] "RemoveContainer" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084738 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} err="failed to get container status \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": rpc error: code = NotFound desc = could not find container \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": container with ID starting with 647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084766 4725 scope.go:117] "RemoveContainer" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.084990 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} err="failed to get container status \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": rpc error: code = NotFound desc = could not find container \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": container with ID starting with d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085014 4725 scope.go:117] "RemoveContainer" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085368 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} err="failed to get container status \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": rpc error: code = NotFound desc = could not find container \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": container with ID starting with 4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085393 4725 scope.go:117] "RemoveContainer" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085787 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} err="failed to get container status \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": rpc error: code = NotFound desc = could not find container \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": container with ID starting with 9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.085807 4725 scope.go:117] "RemoveContainer" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086040 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} err="failed to get container status \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": rpc error: code = NotFound desc = could not find container \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": container with ID starting with eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086057 4725 scope.go:117] "RemoveContainer" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086287 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} err="failed to get container status \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": rpc error: code = NotFound desc = could not find container \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": container with ID starting with f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086322 4725 scope.go:117] "RemoveContainer" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086635 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"} err="failed to get container status \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": rpc error: code = NotFound desc = could not find container \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": container with ID starting with 4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086653 4725 scope.go:117] "RemoveContainer" containerID="3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086864 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2"} err="failed to get container status \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": rpc error: code = NotFound desc = could not find container \"3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2\": container with ID starting with 3a87a44c01a271a16d402b438b446c4ea597f7ec0c00c3ce7c21554a0fddf0d2 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.086881 4725 scope.go:117] "RemoveContainer" containerID="62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087118 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f"} err="failed to get container status \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": rpc error: code = NotFound desc = could not find container \"62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f\": container with ID starting with 62ebf2770de4442cdd379d94f25692ddd534c17ea22870f9768b5188f0a9339f not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087144 4725 scope.go:117] "RemoveContainer" containerID="647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087393 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7"} err="failed to get container status \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": rpc error: code = NotFound desc = could not find container \"647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7\": container with ID starting with 647cd389c2874d14bd5491ddcb6c6425b2682b4fa7057b028621b1978760e1d7 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087411 4725 scope.go:117] "RemoveContainer" containerID="d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087780 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0"} err="failed to get container status \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": rpc error: code = NotFound desc = could not find container \"d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0\": container with ID starting with d056363d7adc7290ef276c9059d996268437309c97f2e0e920cf2c7240d9adc0 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.087796 4725 scope.go:117] "RemoveContainer" containerID="4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088006 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604"} err="failed to get container status \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": rpc error: code = NotFound desc = could not find container \"4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604\": container with ID starting with 4ba0f9a7d0d7fd6ab0da8d89a8986aef791c400f2784b7c8d3718d041d42e604 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088024 4725 scope.go:117] "RemoveContainer" containerID="9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088254 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385"} err="failed to get container status \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": rpc error: code = NotFound desc = could not find container \"9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385\": container with ID starting with 9d0f65e95f279fae49e99237cd15c15834bc569399dea0813bbc8559d1806385 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088275 4725 scope.go:117] "RemoveContainer" containerID="eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088555 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5"} err="failed to get container status \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": rpc error: code = NotFound desc = could not find container \"eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5\": container with ID starting with eedd23d4d522d61b7fe0152169b53e5c8eb3a27caf68e44736ea794acd059ba5 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088573 4725 scope.go:117] "RemoveContainer" containerID="f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088852 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348"} err="failed to get container status \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": rpc error: code = NotFound desc = could not find container \"f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348\": container with ID starting with f28242aaf7db018ceb0791d2019891b0a0732d617454d8f2aa2816d095df2348 not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.088885 4725 scope.go:117] "RemoveContainer" containerID="4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.089272 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa"} err="failed to get container status \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": rpc error: code = NotFound desc = could not find container \"4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa\": container with ID starting with 4bfb66926a8147fdaf09477073c6e0dd310d2876cfa9136f9586a531a54a85aa not found: ID does not exist"
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.891709 4725 generic.go:334] "Generic (PLEG): container finished" podID="66c21855-eb77-483d-8eeb-4e8803477516" containerID="267d0a5dc1f83bfd374c9db2dd9c3173b4e4d0c8fc7cfbe5669976f31cbdf605" exitCode=0
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.891821 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerDied","Data":"267d0a5dc1f83bfd374c9db2dd9c3173b4e4d0c8fc7cfbe5669976f31cbdf605"}
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.891901 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"f0f6941bebb498b63e09e40bc3e98f840083e4422b8a1ad7f981d685553f9263"}
Jan 20 11:16:42 crc kubenswrapper[4725]: I0120 11:16:42.945148 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9143f3c2-a068-494d-b7e1-4200c04394a3" path="/var/lib/kubelet/pods/9143f3c2-a068-494d-b7e1-4200c04394a3/volumes"
Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.906380 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d"
event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"7900a170afa7c3b38123c28e7d1d7311049655b23f17fc5059d1f3650d6f6121"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907138 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"b4edda44f128780cad5ad58c8de0ddf729304cb662ce97362399ef2f8363b776"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907218 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"e7b2351d5b7b60b5743dd60b52c3542c03af392dee279c87243125eac7aa0e1c"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907275 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"d93ae5e3f743fd3c342fcdefae1e722ceffab74156067d38ca526eef5ef8e84a"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907290 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"715944d8f1c3d0efdba2c59573b071dbeec420d24e3abe8d654adbe2a3a7326a"} Jan 20 11:16:43 crc kubenswrapper[4725]: I0120 11:16:43.907306 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"a0672bc8ee8d903e456de464247c06850d230ad18a283107e77289753c516165"} Jan 20 11:16:46 crc kubenswrapper[4725]: I0120 11:16:46.940143 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"d83cc0768146a80e2dd6582b826ad127a9e1b78f55ac1af98690ef546b30c842"} Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.962737 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" event={"ID":"66c21855-eb77-483d-8eeb-4e8803477516","Type":"ContainerStarted","Data":"36a1f11c1523d46876acecd0379c70cc20b5e839c5611ddb31ff2304f6bc096a"} Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.964705 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.964784 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.964842 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:48 crc kubenswrapper[4725]: I0120 11:16:48.993506 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:49 crc kubenswrapper[4725]: I0120 11:16:49.009617 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:16:49 crc kubenswrapper[4725]: I0120 11:16:49.009673 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" podStartSLOduration=8.009645216 
podStartE2EDuration="8.009645216s" podCreationTimestamp="2026-01-20 11:16:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:16:49.002618565 +0000 UTC m=+737.210940538" watchObservedRunningTime="2026-01-20 11:16:49.009645216 +0000 UTC m=+737.217967189" Jan 20 11:16:53 crc kubenswrapper[4725]: I0120 11:16:53.932875 4725 scope.go:117] "RemoveContainer" containerID="02f0aeb59f3b42d33162846abb8c9018dd20f1ace3b32284fe79541bb421e0f5" Jan 20 11:16:55 crc kubenswrapper[4725]: I0120 11:16:55.049870 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/2.log" Jan 20 11:16:55 crc kubenswrapper[4725]: I0120 11:16:55.050369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-vchwb" event={"ID":"627f7c97-4173-413f-a90e-e2c5e058c53b","Type":"ContainerStarted","Data":"0573fb223e7e2b51cbcc09d07e819561bb8d437ed9d4c425afb03dd444701a6b"} Jan 20 11:17:11 crc kubenswrapper[4725]: I0120 11:17:11.866210 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-qbj7d" Jan 20 11:17:18 crc kubenswrapper[4725]: I0120 11:17:18.250129 4725 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 20 11:17:26 crc kubenswrapper[4725]: I0120 11:17:26.728037 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:17:26 crc kubenswrapper[4725]: I0120 11:17:26.728729 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.116174 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.120963 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-hz6gm" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="registry-server" containerID="cri-o://77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" gracePeriod=30 Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.570860 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.634818 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") pod \"5f5afef1-c036-41b7-a884-72ee03a01ea9\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.636103 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") pod \"5f5afef1-c036-41b7-a884-72ee03a01ea9\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.636157 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") pod \"5f5afef1-c036-41b7-a884-72ee03a01ea9\" (UID: \"5f5afef1-c036-41b7-a884-72ee03a01ea9\") " Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.637666 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities" (OuterVolumeSpecName: "utilities") pod "5f5afef1-c036-41b7-a884-72ee03a01ea9" (UID: "5f5afef1-c036-41b7-a884-72ee03a01ea9"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.645558 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2" (OuterVolumeSpecName: "kube-api-access-7wls2") pod "5f5afef1-c036-41b7-a884-72ee03a01ea9" (UID: "5f5afef1-c036-41b7-a884-72ee03a01ea9"). InnerVolumeSpecName "kube-api-access-7wls2". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.649989 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerID="77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" exitCode=0 Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.650070 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerDied","Data":"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9"} Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.650140 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-hz6gm" event={"ID":"5f5afef1-c036-41b7-a884-72ee03a01ea9","Type":"ContainerDied","Data":"620e1c951a5a2604e4ce57c3358b1935e7a5f6d46eec1265f136ddf73f1fb079"} Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.650163 4725 scope.go:117] "RemoveContainer" containerID="77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.650354 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-hz6gm" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.666742 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f5afef1-c036-41b7-a884-72ee03a01ea9" (UID: "5f5afef1-c036-41b7-a884-72ee03a01ea9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.678115 4725 scope.go:117] "RemoveContainer" containerID="fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.695858 4725 scope.go:117] "RemoveContainer" containerID="510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.715391 4725 scope.go:117] "RemoveContainer" containerID="77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" Jan 20 11:17:52 crc kubenswrapper[4725]: E0120 11:17:52.716232 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9\": container with ID starting with 77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9 not found: ID does not exist" containerID="77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.716276 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9"} err="failed to get container status \"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9\": rpc error: code = NotFound desc = could not find container \"77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9\": container with ID starting with 77a87817f3fe69dd7feb600c256c2aa093a8c92e4e04681a765d0272f487d0d9 not found: ID does not exist" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.716365 4725 scope.go:117] "RemoveContainer" containerID="fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98" Jan 20 11:17:52 crc kubenswrapper[4725]: E0120 11:17:52.716765 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98\": container with ID starting with fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98 not found: ID does not exist" containerID="fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.716794 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98"} err="failed to get container status \"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98\": rpc error: code = NotFound desc = could not find container \"fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98\": container with ID starting with fc5e5d5b4ed3a4dc0238a259d481b0857d19f493ef0334cdfe81629cccac0d98 not found: ID does not exist" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.716808 4725 scope.go:117] "RemoveContainer" containerID="510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0" Jan 20 11:17:52 crc 
kubenswrapper[4725]: E0120 11:17:52.717118 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0\": container with ID starting with 510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0 not found: ID does not exist" containerID="510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.717137 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0"} err="failed to get container status \"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0\": rpc error: code = NotFound desc = could not find container \"510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0\": container with ID starting with 510ca6f6e8912929553b8efdb405702730623d4998b299dd17775d18e33cd0c0 not found: ID does not exist" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.737938 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.737996 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f5afef1-c036-41b7-a884-72ee03a01ea9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.738012 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7wls2\" (UniqueName: \"kubernetes.io/projected/5f5afef1-c036-41b7-a884-72ee03a01ea9-kube-api-access-7wls2\") on node \"crc\" DevicePath \"\"" Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.980242 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:17:52 crc kubenswrapper[4725]: I0120 11:17:52.984927 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-hz6gm"] Jan 20 11:17:54 crc kubenswrapper[4725]: I0120 11:17:54.942397 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" path="/var/lib/kubelet/pods/5f5afef1-c036-41b7-a884-72ee03a01ea9/volumes" Jan 20 11:17:56 crc kubenswrapper[4725]: I0120 11:17:56.727791 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:17:56 crc kubenswrapper[4725]: I0120 11:17:56.728455 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.029827 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm"] Jan 20 11:17:57 crc kubenswrapper[4725]: E0120 11:17:57.030326 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="extract-utilities" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.030353 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="extract-utilities" Jan 20 11:17:57 crc kubenswrapper[4725]: E0120 11:17:57.030387 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="extract-content" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.030395 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="extract-content" Jan 20 11:17:57 crc kubenswrapper[4725]: E0120 11:17:57.030403 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="registry-server" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.030411 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="registry-server" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.030583 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f5afef1-c036-41b7-a884-72ee03a01ea9" containerName="registry-server" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.031754 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.035535 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.054806 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm"] Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.214866 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.214994 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.215070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.316337 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: 
\"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.316463 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.316503 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.317238 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.317238 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.342893 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.351551 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.597607 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm"] Jan 20 11:17:57 crc kubenswrapper[4725]: W0120 11:17:57.616400 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod418d6042_ac1e_433e_a820_04d774775787.slice/crio-aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d WatchSource:0}: Error finding container aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d: Status 404 returned error can't find the container with id aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d Jan 20 11:17:57 crc kubenswrapper[4725]: I0120 11:17:57.686363 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerStarted","Data":"aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d"} Jan 20 11:17:58 crc kubenswrapper[4725]: I0120 11:17:58.694926 4725 generic.go:334] "Generic (PLEG): container finished" podID="418d6042-ac1e-433e-a820-04d774775787" containerID="1dc958a87de3cd6c497c14ff1be6f25007b46e4183a208911c83119472655356" exitCode=0 Jan 20 11:17:58 crc kubenswrapper[4725]: I0120 11:17:58.695351 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerDied","Data":"1dc958a87de3cd6c497c14ff1be6f25007b46e4183a208911c83119472655356"} Jan 20 11:17:58 crc kubenswrapper[4725]: I0120 11:17:58.697703 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.166025 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.168489 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.193341 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.262228 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.262722 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.262780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.363757 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.363969 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.364017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.364703 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.364938 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.389212 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") pod \"redhat-operators-6dxvd\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.517200 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.737937 4725 generic.go:334] "Generic (PLEG): container finished" podID="418d6042-ac1e-433e-a820-04d774775787" containerID="7dda32b0ab9711e9a299668d44b89608ce2dd3ed01b455c87737a2b1a6e42351" exitCode=0 Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.738366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerDied","Data":"7dda32b0ab9711e9a299668d44b89608ce2dd3ed01b455c87737a2b1a6e42351"} Jan 20 11:18:00 crc kubenswrapper[4725]: I0120 11:18:00.906853 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:00 crc kubenswrapper[4725]: W0120 11:18:00.916724 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddfbfb8b9_615e_477a_9ab8_112b0c09aa12.slice/crio-9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49 WatchSource:0}: Error finding container 9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49: Status 404 returned error can't find the container with id 9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49 Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.747200 4725 generic.go:334] "Generic (PLEG): container finished" podID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerID="b28c935c40fb5964b74a8daaace2c11004f108bb7d072c1c2c0d741d5ef699dd" exitCode=0 Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.747290 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerDied","Data":"b28c935c40fb5964b74a8daaace2c11004f108bb7d072c1c2c0d741d5ef699dd"} Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.747397 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerStarted","Data":"9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49"} Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.751620 4725 generic.go:334] "Generic (PLEG): container finished" podID="418d6042-ac1e-433e-a820-04d774775787" containerID="575ece06f97dc30c0ba79ac587e8a491b2103baf494e59a2bebe0cde72fa96c4" exitCode=0 Jan 20 11:18:01 crc kubenswrapper[4725]: I0120 11:18:01.751660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerDied","Data":"575ece06f97dc30c0ba79ac587e8a491b2103baf494e59a2bebe0cde72fa96c4"} Jan 20 11:18:02 crc kubenswrapper[4725]: I0120 11:18:02.761006 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" 
event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerStarted","Data":"b832293d87b16279d6b773637db081d939113e1ece2da21f76dbd46e266d78ed"} Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.093622 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.214357 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") pod \"418d6042-ac1e-433e-a820-04d774775787\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.214448 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") pod \"418d6042-ac1e-433e-a820-04d774775787\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.214484 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") pod \"418d6042-ac1e-433e-a820-04d774775787\" (UID: \"418d6042-ac1e-433e-a820-04d774775787\") " Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.217114 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle" (OuterVolumeSpecName: "bundle") pod "418d6042-ac1e-433e-a820-04d774775787" (UID: "418d6042-ac1e-433e-a820-04d774775787"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.218840 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.236124 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9" (OuterVolumeSpecName: "kube-api-access-2tmw9") pod "418d6042-ac1e-433e-a820-04d774775787" (UID: "418d6042-ac1e-433e-a820-04d774775787"). InnerVolumeSpecName "kube-api-access-2tmw9". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.320004 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-2tmw9\" (UniqueName: \"kubernetes.io/projected/418d6042-ac1e-433e-a820-04d774775787-kube-api-access-2tmw9\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.399371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util" (OuterVolumeSpecName: "util") pod "418d6042-ac1e-433e-a820-04d774775787" (UID: "418d6042-ac1e-433e-a820-04d774775787"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.421788 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/418d6042-ac1e-433e-a820-04d774775787-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.773499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" event={"ID":"418d6042-ac1e-433e-a820-04d774775787","Type":"ContainerDied","Data":"aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d"} Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.773570 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="aa384c5a04834e66491369345a4edd73840cca8a01789608ebb703106975759d" Jan 20 11:18:03 crc kubenswrapper[4725]: I0120 11:18:03.773634 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm" Jan 20 11:18:04 crc kubenswrapper[4725]: I0120 11:18:04.783217 4725 generic.go:334] "Generic (PLEG): container finished" podID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerID="b832293d87b16279d6b773637db081d939113e1ece2da21f76dbd46e266d78ed" exitCode=0 Jan 20 11:18:04 crc kubenswrapper[4725]: I0120 11:18:04.783309 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerDied","Data":"b832293d87b16279d6b773637db081d939113e1ece2da21f76dbd46e266d78ed"} Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193416 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk"] Jan 20 11:18:05 crc kubenswrapper[4725]: E0120 11:18:05.193657 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="util" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193671 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="util" Jan 20 11:18:05 crc kubenswrapper[4725]: E0120 11:18:05.193684 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="extract" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193691 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="extract" Jan 20 11:18:05 crc kubenswrapper[4725]: E0120 11:18:05.193708 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="pull" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193715 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="pull" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.193824 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="418d6042-ac1e-433e-a820-04d774775787" containerName="extract" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.195131 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.200577 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-marketplace"/"default-dockercfg-vmwhc" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.210366 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk"] Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.395943 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.396123 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.396183 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.498240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.498327 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.498387 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.500146 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.500195 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.522668 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.792267 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerStarted","Data":"b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4"} Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.815227 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-6dxvd" podStartSLOduration=2.255387679 podStartE2EDuration="5.815205489s" podCreationTimestamp="2026-01-20 11:18:00 +0000 UTC" firstStartedPulling="2026-01-20 11:18:01.750625081 +0000 UTC m=+809.958947054" lastFinishedPulling="2026-01-20 11:18:05.310442891 +0000 UTC m=+813.518764864" observedRunningTime="2026-01-20 11:18:05.813649319 +0000 UTC m=+814.021971292" watchObservedRunningTime="2026-01-20 11:18:05.815205489 +0000 UTC m=+814.023527462" Jan 20 11:18:05 crc kubenswrapper[4725]: I0120 11:18:05.816935 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.043231 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms"] Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.044395 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.066419 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms"] Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.145042 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk"] Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.210227 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.210552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.210639 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.311920 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.312049 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.312455 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.312739 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: 
\"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.312995 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.335709 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.367508 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.716958 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms"] Jan 20 11:18:06 crc kubenswrapper[4725]: W0120 11:18:06.726828 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podea19653a_0b47_400b_bcce_8034cb7f6d55.slice/crio-53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b WatchSource:0}: Error finding container 53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b: Status 404 returned error can't find the container with id 53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.801212 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerStarted","Data":"53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b"} Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.802949 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerStarted","Data":"e05bf73be14de89e1588664ae1d96a70523c14053222557ccd985b4afd63f9c2"} Jan 20 11:18:06 crc kubenswrapper[4725]: I0120 11:18:06.802995 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerStarted","Data":"f83197650a1b5fbe35a37eba7340df1e95f9e5e5cf1734b547d1388f2b52f207"} Jan 20 11:18:07 crc kubenswrapper[4725]: I0120 11:18:07.822466 4725 generic.go:334] "Generic (PLEG): container finished" podID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerID="e05bf73be14de89e1588664ae1d96a70523c14053222557ccd985b4afd63f9c2" exitCode=0 Jan 20 11:18:07 crc 
kubenswrapper[4725]: I0120 11:18:07.823308 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerDied","Data":"e05bf73be14de89e1588664ae1d96a70523c14053222557ccd985b4afd63f9c2"} Jan 20 11:18:07 crc kubenswrapper[4725]: I0120 11:18:07.826807 4725 generic.go:334] "Generic (PLEG): container finished" podID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerID="afcd024f507cdc4dfa0390785e098262bc44374a6f830f19979d6218c6e45d66" exitCode=0 Jan 20 11:18:07 crc kubenswrapper[4725]: I0120 11:18:07.826875 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerDied","Data":"afcd024f507cdc4dfa0390785e098262bc44374a6f830f19979d6218c6e45d66"} Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.809967 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.811438 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.850328 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.852230 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.871686 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerStarted","Data":"2b1f48a6ff2c2ab3ca2ed1d1ab3ea83cb646b049eebfcbda42a7c54067eb83dd"} Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.873316 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.912611 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.912727 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:10 crc kubenswrapper[4725]: I0120 11:18:10.912783 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.014399 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.014469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.014531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.016189 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.016942 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.118462 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") pod \"certified-operators-vvv86\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.402141 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.978902 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-6dxvd" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" probeResult="failure" output=< Jan 20 11:18:11 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:18:11 crc kubenswrapper[4725]: > Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.987439 4725 generic.go:334] "Generic (PLEG): container finished" podID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerID="2b1f48a6ff2c2ab3ca2ed1d1ab3ea83cb646b049eebfcbda42a7c54067eb83dd" exitCode=0 Jan 20 11:18:11 crc kubenswrapper[4725]: I0120 11:18:11.987518 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerDied","Data":"2b1f48a6ff2c2ab3ca2ed1d1ab3ea83cb646b049eebfcbda42a7c54067eb83dd"} Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.101994 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd"] Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.104915 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.131559 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd"] Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.355608 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.355701 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.355783 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.436037 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.457021 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tr6hr\" 
(UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.457121 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.457186 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.458035 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.459210 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.503711 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:12 crc kubenswrapper[4725]: I0120 11:18:12.710582 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.016978 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerStarted","Data":"ef7a2f43f95e56c61116413b01b184fa86e12ae3172ad5e0fede61298f0a6842"} Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.027419 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerStarted","Data":"71662f0ef7ef440bca1d87dc2d21ee57905d03942d4ac656bc52786b52bcd3b1"} Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.035157 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerStarted","Data":"e8a08a6d18e5019b0dee076330df7d141805a5d7b75d721ede3967e6a302582c"} Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.035227 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerStarted","Data":"a51a3e201153e9052123f62f1b87986d749e718183596e10305fe985accf5553"} Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.364984 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" podStartSLOduration=5.908023232 podStartE2EDuration="8.364959083s" podCreationTimestamp="2026-01-20 11:18:05 +0000 UTC" firstStartedPulling="2026-01-20 11:18:07.826239157 +0000 UTC m=+816.034561150" lastFinishedPulling="2026-01-20 11:18:10.283175028 +0000 UTC m=+818.491497001" observedRunningTime="2026-01-20 11:18:13.361676029 +0000 UTC m=+821.569998012" watchObservedRunningTime="2026-01-20 11:18:13.364959083 +0000 UTC m=+821.573281066" Jan 20 11:18:13 crc kubenswrapper[4725]: I0120 11:18:13.673291 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd"] Jan 20 11:18:13 crc kubenswrapper[4725]: W0120 11:18:13.678883 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod10d53364_23ca_4726_bed9_460fb6763fa1.slice/crio-f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10 WatchSource:0}: Error finding container f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10: Status 404 returned error can't find the container with id f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10 Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.192668 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerStarted","Data":"f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10"} Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.194804 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4e296b6-b743-4253-8266-848212ba1001" containerID="e8a08a6d18e5019b0dee076330df7d141805a5d7b75d721ede3967e6a302582c" exitCode=0 Jan 20 11:18:14 crc 
kubenswrapper[4725]: I0120 11:18:14.194855 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerDied","Data":"e8a08a6d18e5019b0dee076330df7d141805a5d7b75d721ede3967e6a302582c"} Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.205863 4725 generic.go:334] "Generic (PLEG): container finished" podID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerID="ef7a2f43f95e56c61116413b01b184fa86e12ae3172ad5e0fede61298f0a6842" exitCode=0 Jan 20 11:18:14 crc kubenswrapper[4725]: I0120 11:18:14.206722 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerDied","Data":"ef7a2f43f95e56c61116413b01b184fa86e12ae3172ad5e0fede61298f0a6842"} Jan 20 11:18:15 crc kubenswrapper[4725]: I0120 11:18:15.370961 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerStarted","Data":"96eea0696ae654e58012771a58a060f675a08683ebec8e9078a27e5e945d55c6"} Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.434111 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerStarted","Data":"5e227c11d87415c07d800131112fe615e9dec133403066a5b7e1a417c675b996"} Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.436672 4725 generic.go:334] "Generic (PLEG): container finished" podID="10d53364-23ca-4726-bed9-460fb6763fa1" containerID="96eea0696ae654e58012771a58a060f675a08683ebec8e9078a27e5e945d55c6" exitCode=0 Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.436766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerDied","Data":"96eea0696ae654e58012771a58a060f675a08683ebec8e9078a27e5e945d55c6"} Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.439803 4725 generic.go:334] "Generic (PLEG): container finished" podID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerID="71662f0ef7ef440bca1d87dc2d21ee57905d03942d4ac656bc52786b52bcd3b1" exitCode=0 Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.439881 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerDied","Data":"71662f0ef7ef440bca1d87dc2d21ee57905d03942d4ac656bc52786b52bcd3b1"} Jan 20 11:18:16 crc kubenswrapper[4725]: I0120 11:18:16.565448 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" podStartSLOduration=6.643830703 podStartE2EDuration="10.565430222s" podCreationTimestamp="2026-01-20 11:18:06 +0000 UTC" firstStartedPulling="2026-01-20 11:18:07.830804302 +0000 UTC m=+816.039126275" lastFinishedPulling="2026-01-20 11:18:11.752403821 +0000 UTC m=+819.960725794" observedRunningTime="2026-01-20 11:18:16.562626593 +0000 UTC m=+824.770948576" watchObservedRunningTime="2026-01-20 11:18:16.565430222 +0000 UTC m=+824.773752185" Jan 20 
11:18:17 crc kubenswrapper[4725]: I0120 11:18:17.447908 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerStarted","Data":"c24ec98af151bdfd2c547f9560f387e755da525ee903219fa6784697775e8546"} Jan 20 11:18:17 crc kubenswrapper[4725]: I0120 11:18:17.450842 4725 generic.go:334] "Generic (PLEG): container finished" podID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerID="5e227c11d87415c07d800131112fe615e9dec133403066a5b7e1a417c675b996" exitCode=0 Jan 20 11:18:17 crc kubenswrapper[4725]: I0120 11:18:17.450911 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerDied","Data":"5e227c11d87415c07d800131112fe615e9dec133403066a5b7e1a417c675b996"} Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.773672 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.924695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") pod \"484dd827-7fd5-4cbc-878f-400b31b6179c\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.924754 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") pod \"484dd827-7fd5-4cbc-878f-400b31b6179c\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.924826 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") pod \"484dd827-7fd5-4cbc-878f-400b31b6179c\" (UID: \"484dd827-7fd5-4cbc-878f-400b31b6179c\") " Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.942285 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle" (OuterVolumeSpecName: "bundle") pod "484dd827-7fd5-4cbc-878f-400b31b6179c" (UID: "484dd827-7fd5-4cbc-878f-400b31b6179c"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.952282 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq" (OuterVolumeSpecName: "kube-api-access-52thq") pod "484dd827-7fd5-4cbc-878f-400b31b6179c" (UID: "484dd827-7fd5-4cbc-878f-400b31b6179c"). InnerVolumeSpecName "kube-api-access-52thq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:18 crc kubenswrapper[4725]: I0120 11:18:18.969314 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util" (OuterVolumeSpecName: "util") pod "484dd827-7fd5-4cbc-878f-400b31b6179c" (UID: "484dd827-7fd5-4cbc-878f-400b31b6179c"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.026881 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.026939 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/484dd827-7fd5-4cbc-878f-400b31b6179c-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.026951 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-52thq\" (UniqueName: \"kubernetes.io/projected/484dd827-7fd5-4cbc-878f-400b31b6179c-kube-api-access-52thq\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.286384 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.334118 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") pod \"ea19653a-0b47-400b-bcce-8034cb7f6d55\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.334175 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") pod \"ea19653a-0b47-400b-bcce-8034cb7f6d55\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.335779 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle" (OuterVolumeSpecName: "bundle") pod "ea19653a-0b47-400b-bcce-8034cb7f6d55" (UID: "ea19653a-0b47-400b-bcce-8034cb7f6d55"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.337560 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2" (OuterVolumeSpecName: "kube-api-access-zflz2") pod "ea19653a-0b47-400b-bcce-8034cb7f6d55" (UID: "ea19653a-0b47-400b-bcce-8034cb7f6d55"). InnerVolumeSpecName "kube-api-access-zflz2". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.459945 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") pod \"ea19653a-0b47-400b-bcce-8034cb7f6d55\" (UID: \"ea19653a-0b47-400b-bcce-8034cb7f6d55\") " Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.460275 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.460290 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zflz2\" (UniqueName: \"kubernetes.io/projected/ea19653a-0b47-400b-bcce-8034cb7f6d55-kube-api-access-zflz2\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.495445 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util" (OuterVolumeSpecName: "util") pod "ea19653a-0b47-400b-bcce-8034cb7f6d55" (UID: "ea19653a-0b47-400b-bcce-8034cb7f6d55"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.562168 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/ea19653a-0b47-400b-bcce-8034cb7f6d55-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.699966 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4e296b6-b743-4253-8266-848212ba1001" containerID="c24ec98af151bdfd2c547f9560f387e755da525ee903219fa6784697775e8546" exitCode=0 Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.700109 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerDied","Data":"c24ec98af151bdfd2c547f9560f387e755da525ee903219fa6784697775e8546"} Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.733851 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" event={"ID":"ea19653a-0b47-400b-bcce-8034cb7f6d55","Type":"ContainerDied","Data":"53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b"} Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.733929 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="53e9a49e6f0bceb5eee877a245693c04ece0f113b83cdcb71638504428b9b03b" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.733869 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.782795 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" event={"ID":"484dd827-7fd5-4cbc-878f-400b31b6179c","Type":"ContainerDied","Data":"f83197650a1b5fbe35a37eba7340df1e95f9e5e5cf1734b547d1388f2b52f207"} Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.782864 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f83197650a1b5fbe35a37eba7340df1e95f9e5e5cf1734b547d1388f2b52f207" Jan 20 11:18:19 crc kubenswrapper[4725]: I0120 11:18:19.783000 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk" Jan 20 11:18:20 crc kubenswrapper[4725]: I0120 11:18:20.648736 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:20 crc kubenswrapper[4725]: I0120 11:18:20.715914 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:20 crc kubenswrapper[4725]: I0120 11:18:20.799524 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerStarted","Data":"ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61"} Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189284 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg"] Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189572 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="util" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189598 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="util" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189610 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="util" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189616 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="util" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189627 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189633 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189640 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189648 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189656 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="pull" Jan 20 11:18:21 crc 
kubenswrapper[4725]: I0120 11:18:21.189662 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="pull" Jan 20 11:18:21 crc kubenswrapper[4725]: E0120 11:18:21.189673 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="pull" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189679 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="pull" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189852 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="484dd827-7fd5-4cbc-878f-400b31b6179c" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.189876 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ea19653a-0b47-400b-bcce-8034cb7f6d55" containerName="extract" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.190420 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.192630 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"openshift-service-ca.crt" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.192783 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-dockercfg-r68zl" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.193640 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operators"/"kube-root-ca.crt" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.214597 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.296892 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8x66r\" (UniqueName: \"kubernetes.io/projected/0bc9f0db-ee2d-43d3-8fc7-66f2b155c710-kube-api-access-8x66r\") pod \"obo-prometheus-operator-68bc856cb9-sl5rg\" (UID: \"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.338955 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.339773 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.357434 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-service-cert" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.357487 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"obo-prometheus-operator-admission-webhook-dockercfg-2qzm8" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.372800 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.373647 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.397719 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8x66r\" (UniqueName: \"kubernetes.io/projected/0bc9f0db-ee2d-43d3-8fc7-66f2b155c710-kube-api-access-8x66r\") pod \"obo-prometheus-operator-68bc856cb9-sl5rg\" (UID: \"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.397780 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.397815 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.401407 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.432635 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.447384 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8x66r\" (UniqueName: \"kubernetes.io/projected/0bc9f0db-ee2d-43d3-8fc7-66f2b155c710-kube-api-access-8x66r\") pod \"obo-prometheus-operator-68bc856cb9-sl5rg\" (UID: \"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710\") " pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.535097 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.536259 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.536534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.536613 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.536687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.547934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.563448 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a5d78053-6a08-448a-93ca-1c0e2334617a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5\" (UID: \"a5d78053-6a08-448a-93ca-1c0e2334617a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.638534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.638621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-apiservice-cert\") pod 
\"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.640835 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cjnzp"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.644707 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.650760 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/05acb89f-79ef-4e5a-8713-af3abbf86d5a-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-c88c9f498-lh85b\" (UID: \"05acb89f-79ef-4e5a-8713-af3abbf86d5a\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.657436 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.666388 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.670537 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-sa-dockercfg-jpqnf" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.670850 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"observability-operator-tls" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.685366 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cjnzp"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.693442 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.748099 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5czqx\" (UniqueName: \"kubernetes.io/projected/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-kube-api-access-5czqx\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.748176 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.849675 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.849793 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5czqx\" (UniqueName: \"kubernetes.io/projected/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-kube-api-access-5czqx\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.855019 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-observability-operator-tls\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.882796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5czqx\" (UniqueName: \"kubernetes.io/projected/ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002-kube-api-access-5czqx\") pod \"observability-operator-59bdc8b94-cjnzp\" (UID: \"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002\") " pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.900284 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckz5m"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.901131 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.911775 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operators"/"perses-operator-dockercfg-zcpfd" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.917943 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckz5m"] Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.986066 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vvv86" podStartSLOduration=5.724848405 podStartE2EDuration="11.986035453s" podCreationTimestamp="2026-01-20 11:18:10 +0000 UTC" firstStartedPulling="2026-01-20 11:18:14.197348534 +0000 UTC m=+822.405670507" lastFinishedPulling="2026-01-20 11:18:20.458535582 +0000 UTC m=+828.666857555" observedRunningTime="2026-01-20 11:18:21.978603168 +0000 UTC m=+830.186925161" watchObservedRunningTime="2026-01-20 11:18:21.986035453 +0000 UTC m=+830.194357426" Jan 20 11:18:21 crc kubenswrapper[4725]: I0120 11:18:21.994957 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.070208 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.070294 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h5nhp\" (UniqueName: \"kubernetes.io/projected/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-kube-api-access-h5nhp\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.172346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.172895 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h5nhp\" (UniqueName: \"kubernetes.io/projected/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-kube-api-access-h5nhp\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.174754 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-openshift-service-ca\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.204854 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-h5nhp\" (UniqueName: \"kubernetes.io/projected/5a2dcc7a-6d62-412d-a25f-fea592c85bf5-kube-api-access-h5nhp\") pod \"perses-operator-5bf474d74f-ckz5m\" (UID: \"5a2dcc7a-6d62-412d-a25f-fea592c85bf5\") " pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.233489 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:22 crc kubenswrapper[4725]: I0120 11:18:22.846553 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5"] Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.033632 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg"] Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.063953 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b"] Jan 20 11:18:23 crc kubenswrapper[4725]: W0120 11:18:23.103242 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod05acb89f_79ef_4e5a_8713_af3abbf86d5a.slice/crio-032d77d60c1fe0014f2d818102d23804daf1b5e22577061adb37e1fe80c594ea WatchSource:0}: Error finding container 032d77d60c1fe0014f2d818102d23804daf1b5e22577061adb37e1fe80c594ea: Status 404 returned error can't find the container with id 032d77d60c1fe0014f2d818102d23804daf1b5e22577061adb37e1fe80c594ea Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.109866 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-59bdc8b94-cjnzp"] Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.311789 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-5bf474d74f-ckz5m"] Jan 20 11:18:23 crc kubenswrapper[4725]: W0120 11:18:23.316438 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5a2dcc7a_6d62_412d_a25f_fea592c85bf5.slice/crio-7bf3f348e71816881246fbaf15bbad1a3cde68876578c739a3ba6453442ea804 WatchSource:0}: Error finding container 7bf3f348e71816881246fbaf15bbad1a3cde68876578c739a3ba6453442ea804: Status 404 returned error can't find the container with id 7bf3f348e71816881246fbaf15bbad1a3cde68876578c739a3ba6453442ea804 Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.972233 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" event={"ID":"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002","Type":"ContainerStarted","Data":"c1de0eb75d154e63b4916dc1e6f8f88ec95285377d50f04f6e80570b1fbf778b"} Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.973609 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" event={"ID":"05acb89f-79ef-4e5a-8713-af3abbf86d5a","Type":"ContainerStarted","Data":"032d77d60c1fe0014f2d818102d23804daf1b5e22577061adb37e1fe80c594ea"} Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.974402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" event={"ID":"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710","Type":"ContainerStarted","Data":"fb06f071a9a039e6d50f013cf80952e3e708fd47a93aa93f0cc92f67e516a839"} Jan 20 11:18:23 crc 
kubenswrapper[4725]: I0120 11:18:23.975210 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" event={"ID":"5a2dcc7a-6d62-412d-a25f-fea592c85bf5","Type":"ContainerStarted","Data":"7bf3f348e71816881246fbaf15bbad1a3cde68876578c739a3ba6453442ea804"} Jan 20 11:18:23 crc kubenswrapper[4725]: I0120 11:18:23.976229 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" event={"ID":"a5d78053-6a08-448a-93ca-1c0e2334617a","Type":"ContainerStarted","Data":"885a97651c7e0433abf095423f0e90eff0d1ae1198320ffd0e551b5d406aa354"} Jan 20 11:18:25 crc kubenswrapper[4725]: I0120 11:18:25.146713 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:25 crc kubenswrapper[4725]: I0120 11:18:25.147221 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-6dxvd" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" containerID="cri-o://b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4" gracePeriod=2 Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.092904 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-6886c99b94-tzbc7"] Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.094183 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.099448 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-service-cert" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.099787 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"kube-root-ca.crt" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.099927 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"openshift-service-ca.crt" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.101289 4725 generic.go:334] "Generic (PLEG): container finished" podID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerID="b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4" exitCode=0 Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.101337 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerDied","Data":"b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4"} Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.102964 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elastic-operator-dockercfg-mh884" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.195723 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntxxn\" (UniqueName: \"kubernetes.io/projected/ce11e344-b219-4b22-b05b-a21b78fc7d98-kube-api-access-ntxxn\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.196097 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: 
\"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-webhook-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.196211 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-apiservice-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.298260 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntxxn\" (UniqueName: \"kubernetes.io/projected/ce11e344-b219-4b22-b05b-a21b78fc7d98-kube-api-access-ntxxn\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.298400 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-webhook-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.298437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-apiservice-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.306445 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-apiservice-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.320528 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/ce11e344-b219-4b22-b05b-a21b78fc7d98-webhook-cert\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.441241 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntxxn\" (UniqueName: \"kubernetes.io/projected/ce11e344-b219-4b22-b05b-a21b78fc7d98-kube-api-access-ntxxn\") pod \"elastic-operator-6886c99b94-tzbc7\" (UID: \"ce11e344-b219-4b22-b05b-a21b78fc7d98\") " pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.480383 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6886c99b94-tzbc7"] Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.716283 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.730141 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.730238 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.730300 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.731482 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:18:26 crc kubenswrapper[4725]: I0120 11:18:26.731553 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2" gracePeriod=600 Jan 20 11:18:27 crc kubenswrapper[4725]: I0120 11:18:27.121852 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2" exitCode=0 Jan 20 11:18:27 crc kubenswrapper[4725]: I0120 11:18:27.121930 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2"} Jan 20 11:18:27 crc kubenswrapper[4725]: I0120 11:18:27.121982 4725 scope.go:117] "RemoveContainer" containerID="76a81355d30bc66a5871f02822f0a9240f627473cc4697ea154fb735e225c69b" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.571880 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.733047 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") pod \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.733331 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") pod \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.733467 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") pod \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\" (UID: \"dfbfb8b9-615e-477a-9ab8-112b0c09aa12\") " Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.737445 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities" (OuterVolumeSpecName: "utilities") pod "dfbfb8b9-615e-477a-9ab8-112b0c09aa12" (UID: "dfbfb8b9-615e-477a-9ab8-112b0c09aa12"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.748589 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49" (OuterVolumeSpecName: "kube-api-access-nrm49") pod "dfbfb8b9-615e-477a-9ab8-112b0c09aa12" (UID: "dfbfb8b9-615e-477a-9ab8-112b0c09aa12"). InnerVolumeSpecName "kube-api-access-nrm49". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.838794 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-nrm49\" (UniqueName: \"kubernetes.io/projected/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-kube-api-access-nrm49\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.839484 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.914946 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dfbfb8b9-615e-477a-9ab8-112b0c09aa12" (UID: "dfbfb8b9-615e-477a-9ab8-112b0c09aa12"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.939972 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dfbfb8b9-615e-477a-9ab8-112b0c09aa12-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:28 crc kubenswrapper[4725]: I0120 11:18:28.981174 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-6886c99b94-tzbc7"] Jan 20 11:18:28 crc kubenswrapper[4725]: W0120 11:18:28.995295 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podce11e344_b219_4b22_b05b_a21b78fc7d98.slice/crio-bdaf1e47694847c13f48f8ef68d543bfff55003761b6498c40d53ec6c385d0a9 WatchSource:0}: Error finding container bdaf1e47694847c13f48f8ef68d543bfff55003761b6498c40d53ec6c385d0a9: Status 404 returned error can't find the container with id bdaf1e47694847c13f48f8ef68d543bfff55003761b6498c40d53ec6c385d0a9 Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.167177 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" event={"ID":"ce11e344-b219-4b22-b05b-a21b78fc7d98","Type":"ContainerStarted","Data":"bdaf1e47694847c13f48f8ef68d543bfff55003761b6498c40d53ec6c385d0a9"} Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.175459 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-6dxvd" event={"ID":"dfbfb8b9-615e-477a-9ab8-112b0c09aa12","Type":"ContainerDied","Data":"9e71a7c6e17b663ce879958f77ba79bc2e033518300716782480d4a94f0a9c49"} Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.175535 4725 scope.go:117] "RemoveContainer" containerID="b0f70b712c905a55b259faee2c4b5bab10f818119e51a23f7359145520a977d4" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.175684 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-6dxvd" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.186639 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerStarted","Data":"18f46d6d120071cafa0d0486418f2f1a267e6e4ccb6923aa5ce9fdea31b10509"} Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.200983 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.202850 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946"} Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.215108 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-6dxvd"] Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.233998 4725 scope.go:117] "RemoveContainer" containerID="b832293d87b16279d6b773637db081d939113e1ece2da21f76dbd46e266d78ed" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.355558 4725 scope.go:117] "RemoveContainer" containerID="b28c935c40fb5964b74a8daaace2c11004f108bb7d072c1c2c0d741d5ef699dd" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.642925 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7p9dr"] Jan 20 11:18:29 crc kubenswrapper[4725]: E0120 11:18:29.643748 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.643769 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" Jan 20 11:18:29 crc kubenswrapper[4725]: E0120 11:18:29.643787 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="extract-content" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.643795 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="extract-content" Jan 20 11:18:29 crc kubenswrapper[4725]: E0120 11:18:29.643818 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="extract-utilities" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.643826 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="extract-utilities" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.643950 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" containerName="registry-server" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.644854 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.647673 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"interconnect-operator-dockercfg-q4m8g" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.657569 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7p9dr"] Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.758260 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lt87\" (UniqueName: \"kubernetes.io/projected/a923dc59-d518-4ee4-a92c-1bb5ad6e7158-kube-api-access-9lt87\") pod \"interconnect-operator-5bb49f789d-7p9dr\" (UID: \"a923dc59-d518-4ee4-a92c-1bb5ad6e7158\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.860308 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9lt87\" (UniqueName: \"kubernetes.io/projected/a923dc59-d518-4ee4-a92c-1bb5ad6e7158-kube-api-access-9lt87\") pod \"interconnect-operator-5bb49f789d-7p9dr\" (UID: \"a923dc59-d518-4ee4-a92c-1bb5ad6e7158\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.885205 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9lt87\" (UniqueName: \"kubernetes.io/projected/a923dc59-d518-4ee4-a92c-1bb5ad6e7158-kube-api-access-9lt87\") pod \"interconnect-operator-5bb49f789d-7p9dr\" (UID: \"a923dc59-d518-4ee4-a92c-1bb5ad6e7158\") " pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:29 crc kubenswrapper[4725]: I0120 11:18:29.972452 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" Jan 20 11:18:30 crc kubenswrapper[4725]: I0120 11:18:30.287378 4725 generic.go:334] "Generic (PLEG): container finished" podID="10d53364-23ca-4726-bed9-460fb6763fa1" containerID="18f46d6d120071cafa0d0486418f2f1a267e6e4ccb6923aa5ce9fdea31b10509" exitCode=0 Jan 20 11:18:30 crc kubenswrapper[4725]: I0120 11:18:30.287572 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerDied","Data":"18f46d6d120071cafa0d0486418f2f1a267e6e4ccb6923aa5ce9fdea31b10509"} Jan 20 11:18:30 crc kubenswrapper[4725]: I0120 11:18:30.427852 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-5bb49f789d-7p9dr"] Jan 20 11:18:30 crc kubenswrapper[4725]: W0120 11:18:30.449699 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-poda923dc59_d518_4ee4_a92c_1bb5ad6e7158.slice/crio-253c2cce710af84682cf4560c3a6ffb6cd2947af322302ef2c8998eb3dc50841 WatchSource:0}: Error finding container 253c2cce710af84682cf4560c3a6ffb6cd2947af322302ef2c8998eb3dc50841: Status 404 returned error can't find the container with id 253c2cce710af84682cf4560c3a6ffb6cd2947af322302ef2c8998eb3dc50841 Jan 20 11:18:30 crc kubenswrapper[4725]: I0120 11:18:30.953202 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dfbfb8b9-615e-477a-9ab8-112b0c09aa12" path="/var/lib/kubelet/pods/dfbfb8b9-615e-477a-9ab8-112b0c09aa12/volumes" Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.155721 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.156040 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.230253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" event={"ID":"a923dc59-d518-4ee4-a92c-1bb5ad6e7158","Type":"ContainerStarted","Data":"253c2cce710af84682cf4560c3a6ffb6cd2947af322302ef2c8998eb3dc50841"} Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.262257 4725 generic.go:334] "Generic (PLEG): container finished" podID="10d53364-23ca-4726-bed9-460fb6763fa1" containerID="523aace2da8268f02b1c1009bb3b3093590c510c65568a8b4238b8dfa2bb2bed" exitCode=0 Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.262332 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerDied","Data":"523aace2da8268f02b1c1009bb3b3093590c510c65568a8b4238b8dfa2bb2bed"} Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.292343 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:32 crc kubenswrapper[4725]: I0120 11:18:32.393175 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:36 crc kubenswrapper[4725]: I0120 11:18:36.151545 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" 
pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:36 crc kubenswrapper[4725]: I0120 11:18:36.155681 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vvv86" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" containerID="cri-o://ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" gracePeriod=2 Jan 20 11:18:37 crc kubenswrapper[4725]: I0120 11:18:37.314201 4725 generic.go:334] "Generic (PLEG): container finished" podID="d4e296b6-b743-4253-8266-848212ba1001" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" exitCode=0 Jan 20 11:18:37 crc kubenswrapper[4725]: I0120 11:18:37.314770 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerDied","Data":"ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61"} Jan 20 11:18:41 crc kubenswrapper[4725]: E0120 11:18:41.403974 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:41 crc kubenswrapper[4725]: E0120 11:18:41.404996 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:41 crc kubenswrapper[4725]: E0120 11:18:41.405491 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:41 crc kubenswrapper[4725]: E0120 11:18:41.405535 4725 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-vvv86" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:47.867379 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.033251 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") pod \"10d53364-23ca-4726-bed9-460fb6763fa1\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.033777 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") pod \"10d53364-23ca-4726-bed9-460fb6763fa1\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.033878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") pod \"10d53364-23ca-4726-bed9-460fb6763fa1\" (UID: \"10d53364-23ca-4726-bed9-460fb6763fa1\") " Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.034919 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle" (OuterVolumeSpecName: "bundle") pod "10d53364-23ca-4726-bed9-460fb6763fa1" (UID: "10d53364-23ca-4726-bed9-460fb6763fa1"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.051999 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr" (OuterVolumeSpecName: "kube-api-access-tr6hr") pod "10d53364-23ca-4726-bed9-460fb6763fa1" (UID: "10d53364-23ca-4726-bed9-460fb6763fa1"). InnerVolumeSpecName "kube-api-access-tr6hr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.057897 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util" (OuterVolumeSpecName: "util") pod "10d53364-23ca-4726-bed9-460fb6763fa1" (UID: "10d53364-23ca-4726-bed9-460fb6763fa1"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.137373 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tr6hr\" (UniqueName: \"kubernetes.io/projected/10d53364-23ca-4726-bed9-460fb6763fa1-kube-api-access-tr6hr\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.137473 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.137487 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/10d53364-23ca-4726-bed9-460fb6763fa1-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.421326 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" event={"ID":"10d53364-23ca-4726-bed9-460fb6763fa1","Type":"ContainerDied","Data":"f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10"} Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.421632 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="f1c998f0cf5c0c88b3f6016cefee38892bc62bcc818c2c7c870f6cd0cbb1bf10" Jan 20 11:18:48 crc kubenswrapper[4725]: I0120 11:18:48.421445 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.138756 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.139306 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5_openshift-operators(a5d78053-6a08-448a-93ca-1c0e2334617a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.143455 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" podUID="a5d78053-6a08-448a-93ca-1c0e2334617a" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.149324 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.149552 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus-operator-admission-webhook,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea,Command:[],Args:[--web.enable-tls=true --web.cert-file=/tmp/k8s-webhook-server/serving-certs/tls.crt --web.key-file=/tmp/k8s-webhook-server/serving-certs/tls.key],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{200 -3} {} 200m DecimalSI},memory: {{209715200 0} {} BinarySI},},Requests:ResourceList{cpu: {{50 -3} {} 50m DecimalSI},memory: {{52428800 0} {} 50Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-admission-webhook-c88c9f498-lh85b_openshift-operators(05acb89f-79ef-4e5a-8713-af3abbf86d5a): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.150852 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" podUID="05acb89f-79ef-4e5a-8713-af3abbf86d5a" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.432435 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" podUID="a5d78053-6a08-448a-93ca-1c0e2334617a" Jan 20 11:18:49 crc kubenswrapper[4725]: E0120 11:18:49.432894 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator-admission-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-admission-webhook-rhel9@sha256:42ebc3571195d8c41fd01b8d08e98fe2cc12c1caabea251aecb4442d8eade4ea\\\"\"" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" podUID="05acb89f-79ef-4e5a-8713-af3abbf86d5a" Jan 20 11:18:50 crc kubenswrapper[4725]: E0120 11:18:50.342892 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a" Jan 20 11:18:50 crc kubenswrapper[4725]: E0120 11:18:50.343136 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container 
&Container{Name:prometheus-operator,Image:registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a,Command:[],Args:[--prometheus-config-reloader=$(RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER) --prometheus-instance-selector=app.kubernetes.io/managed-by=observability-operator --alertmanager-instance-selector=app.kubernetes.io/managed-by=observability-operator --thanos-ruler-instance-selector=app.kubernetes.io/managed-by=observability-operator --watch-referenced-objects-in-all-namespaces=true --disable-unmanaged-prometheus-configuration=true],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:http,HostPort:0,ContainerPort:8080,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:GOGC,Value:30,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_PROMETHEUS_CONFIG_RELOADER,Value:registry.redhat.io/cluster-observability-operator/obo-prometheus-operator-prometheus-config-reloader-rhel9@sha256:9a2097bc5b2e02bc1703f64c452ce8fe4bc6775b732db930ff4770b76ae4653a,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cluster-observability-operator.v1.3.1,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{524288000 0} {} 500Mi BinarySI},},Requests:ResourceList{cpu: {{5 -3} {} 5m DecimalSI},memory: {{157286400 0} {} 150Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-8x66r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod obo-prometheus-operator-68bc856cb9-sl5rg_openshift-operators(0bc9f0db-ee2d-43d3-8fc7-66f2b155c710): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:18:50 crc kubenswrapper[4725]: E0120 11:18:50.344626 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" podUID="0bc9f0db-ee2d-43d3-8fc7-66f2b155c710" Jan 20 11:18:50 crc kubenswrapper[4725]: E0120 11:18:50.437174 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cluster-observability-operator/obo-prometheus-rhel9-operator@sha256:e7e5f4c5e8ab0ba298ef0295a7137d438a42eb177d9322212cde6ba8f367912a\\\"\"" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" podUID="0bc9f0db-ee2d-43d3-8fc7-66f2b155c710" Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.404394 4725 log.go:32] "ExecSync cmd from 
runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.406154 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.406960 4725 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" cmd=["grpc_health_probe","-addr=:50051"] Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.407057 4725 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61 is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/certified-operators-vvv86" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.471047 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105" Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.471353 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:manager,Image:registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105,Command:[],Args:[manager --config=/conf/eck.yaml --manage-webhook-certs=false --enable-webhook --ubi-only --distribution-channel=certified-operators],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https-webhook,HostPort:0,ContainerPort:9443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:NAMESPACES,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.operatorNamespace'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_IMAGE,Value:registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:elasticsearch-eck-operator-certified.v3.2.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{cpu: {{1 0} {} 1 DecimalSI},memory: {{1073741824 0} {} 1Gi BinarySI},},Requests:ResourceList{cpu: {{100 -3} {} 100m DecimalSI},memory: {{157286400 0} {} 150Mi 
BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:apiservice-cert,ReadOnly:false,MountPath:/apiserver.local.config/certificates,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:webhook-cert,ReadOnly:false,MountPath:/tmp/k8s-webhook-server/serving-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ntxxn,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod elastic-operator-6886c99b94-tzbc7_service-telemetry(ce11e344-b219-4b22-b05b-a21b78fc7d98): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 11:18:51 crc kubenswrapper[4725]: E0120 11:18:51.472941 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" podUID="ce11e344-b219-4b22-b05b-a21b78fc7d98" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.528544 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.610418 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") pod \"d4e296b6-b743-4253-8266-848212ba1001\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.610540 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") pod \"d4e296b6-b743-4253-8266-848212ba1001\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.610626 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") pod \"d4e296b6-b743-4253-8266-848212ba1001\" (UID: \"d4e296b6-b743-4253-8266-848212ba1001\") " Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.611950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities" (OuterVolumeSpecName: "utilities") pod "d4e296b6-b743-4253-8266-848212ba1001" (UID: "d4e296b6-b743-4253-8266-848212ba1001"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.626972 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr" (OuterVolumeSpecName: "kube-api-access-dnsmr") pod "d4e296b6-b743-4253-8266-848212ba1001" (UID: "d4e296b6-b743-4253-8266-848212ba1001"). InnerVolumeSpecName "kube-api-access-dnsmr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.671055 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d4e296b6-b743-4253-8266-848212ba1001" (UID: "d4e296b6-b743-4253-8266-848212ba1001"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.712793 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.712869 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4e296b6-b743-4253-8266-848212ba1001-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:51 crc kubenswrapper[4725]: I0120 11:18:51.712884 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dnsmr\" (UniqueName: \"kubernetes.io/projected/d4e296b6-b743-4253-8266-848212ba1001-kube-api-access-dnsmr\") on node \"crc\" DevicePath \"\"" Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.452650 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vvv86" Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.452900 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vvv86" event={"ID":"d4e296b6-b743-4253-8266-848212ba1001","Type":"ContainerDied","Data":"a51a3e201153e9052123f62f1b87986d749e718183596e10305fe985accf5553"} Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.453593 4725 scope.go:117] "RemoveContainer" containerID="ade2a6063104572d0f624579a4a2d2d3ce2533474c7b4fb3e6e5a221d3b2fe61" Jan 20 11:18:52 crc kubenswrapper[4725]: E0120 11:18:52.454773 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manager\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.connect.redhat.com/elastic/eck-operator@sha256:28925fffef8f7c920b2510810cbcfc0f3dadab5f8a80b01fd5ae500e5c070105\\\"\"" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" podUID="ce11e344-b219-4b22-b05b-a21b78fc7d98" Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.500947 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.506157 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vvv86"] Jan 20 11:18:52 crc kubenswrapper[4725]: I0120 11:18:52.942065 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d4e296b6-b743-4253-8266-848212ba1001" path="/var/lib/kubelet/pods/d4e296b6-b743-4253-8266-848212ba1001/volumes" Jan 20 11:18:56 crc kubenswrapper[4725]: I0120 11:18:56.340464 4725 scope.go:117] "RemoveContainer" containerID="c24ec98af151bdfd2c547f9560f387e755da525ee903219fa6784697775e8546" Jan 20 11:18:56 crc kubenswrapper[4725]: I0120 11:18:56.411444 4725 scope.go:117] "RemoveContainer" containerID="e8a08a6d18e5019b0dee076330df7d141805a5d7b75d721ede3967e6a302582c" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.527480 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" event={"ID":"5a2dcc7a-6d62-412d-a25f-fea592c85bf5","Type":"ContainerStarted","Data":"d7981d56f83107dfdb67a66ae08dc92b86b0b5a09c0b8adfa83ebbd2415fbb0a"} Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.529174 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.530482 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" event={"ID":"ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002","Type":"ContainerStarted","Data":"42c5f7cedac5395ba98a70b66fb37997f02d2baf15a657dc7e86f3801eddfed6"} Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.530820 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.533092 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" event={"ID":"a923dc59-d518-4ee4-a92c-1bb5ad6e7158","Type":"ContainerStarted","Data":"76c8059c9ce0bac718250baa31b7abd576df85323cff90d10ef2a3ccca079460"} Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.563968 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" podStartSLOduration=3.995931246 podStartE2EDuration="36.56394375s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:23.31937384 +0000 UTC m=+831.527695813" lastFinishedPulling="2026-01-20 11:18:55.887386344 +0000 UTC m=+864.095708317" observedRunningTime="2026-01-20 11:18:57.55725283 +0000 UTC m=+865.765574813" watchObservedRunningTime="2026-01-20 11:18:57.56394375 +0000 UTC m=+865.772265723" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.605522 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" podStartSLOduration=3.9033286609999998 podStartE2EDuration="36.605507306s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:23.18648262 +0000 UTC m=+831.394804593" lastFinishedPulling="2026-01-20 11:18:55.888661255 +0000 UTC m=+864.096983238" observedRunningTime="2026-01-20 11:18:57.60148711 +0000 UTC m=+865.809809083" watchObservedRunningTime="2026-01-20 11:18:57.605507306 +0000 UTC m=+865.813829279" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.628027 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-5bb49f789d-7p9dr" podStartSLOduration=2.649023137 podStartE2EDuration="28.627994912s" podCreationTimestamp="2026-01-20 11:18:29 +0000 UTC" firstStartedPulling="2026-01-20 11:18:30.462553763 +0000 UTC m=+838.670875726" lastFinishedPulling="2026-01-20 11:18:56.441525528 +0000 UTC m=+864.649847501" observedRunningTime="2026-01-20 11:18:57.623152991 +0000 UTC m=+865.831474984" watchObservedRunningTime="2026-01-20 11:18:57.627994912 +0000 UTC m=+865.836316885" Jan 20 11:18:57 crc kubenswrapper[4725]: I0120 11:18:57.884473 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-59bdc8b94-cjnzp" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209100 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"] Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209681 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="pull" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209696 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="pull" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209710 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209718 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209728 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="util" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209735 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="util" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209744 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="extract-content" Jan 20 
11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209751 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="extract-content" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209766 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="extract-utilities" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209772 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="extract-utilities" Jan 20 11:19:01 crc kubenswrapper[4725]: E0120 11:19:01.209788 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="extract" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209795 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="extract" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209929 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="d4e296b6-b743-4253-8266-848212ba1001" containerName="registry-server" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.209945 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="10d53364-23ca-4726-bed9-460fb6763fa1" containerName="extract" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.210564 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.214430 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"kube-root-ca.crt" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.214878 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager-operator"/"openshift-service-ca.crt" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.219945 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager-operator"/"cert-manager-operator-controller-manager-dockercfg-6z2qj" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.229826 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"] Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.391276 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5r89m\" (UniqueName: \"kubernetes.io/projected/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-kube-api-access-5r89m\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.391340 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.492348 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-tmp\") pod 
\"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.492464 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5r89m\" (UniqueName: \"kubernetes.io/projected/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-kube-api-access-5r89m\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.492878 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-tmp\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.521620 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5r89m\" (UniqueName: \"kubernetes.io/projected/07b8a4cd-9f0f-405a-a03d-749bdd01dcce-kube-api-access-5r89m\") pod \"cert-manager-operator-controller-manager-5446d6888b-8p62k\" (UID: \"07b8a4cd-9f0f-405a-a03d-749bdd01dcce\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:01 crc kubenswrapper[4725]: I0120 11:19:01.532577 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" Jan 20 11:19:02 crc kubenswrapper[4725]: I0120 11:19:02.279065 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-5bf474d74f-ckz5m" Jan 20 11:19:02 crc kubenswrapper[4725]: I0120 11:19:02.356464 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k"] Jan 20 11:19:02 crc kubenswrapper[4725]: W0120 11:19:02.364485 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07b8a4cd_9f0f_405a_a03d_749bdd01dcce.slice/crio-3adf1e682c07eb4732aad80eccc599ca3c4c23db14b636014b4f7e97db110fc9 WatchSource:0}: Error finding container 3adf1e682c07eb4732aad80eccc599ca3c4c23db14b636014b4f7e97db110fc9: Status 404 returned error can't find the container with id 3adf1e682c07eb4732aad80eccc599ca3c4c23db14b636014b4f7e97db110fc9 Jan 20 11:19:02 crc kubenswrapper[4725]: I0120 11:19:02.775106 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" event={"ID":"07b8a4cd-9f0f-405a-a03d-749bdd01dcce","Type":"ContainerStarted","Data":"3adf1e682c07eb4732aad80eccc599ca3c4c23db14b636014b4f7e97db110fc9"} Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.800539 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" event={"ID":"a5d78053-6a08-448a-93ca-1c0e2334617a","Type":"ContainerStarted","Data":"d0e1739a0253cf18b9a53d0437dcfd1486c75bf1be5683ebaf6a85995537d336"} Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.811788 4725 kubelet.go:2453] "SyncLoop (PLEG): event for 
pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" event={"ID":"05acb89f-79ef-4e5a-8713-af3abbf86d5a","Type":"ContainerStarted","Data":"0c71ce19be34f4c5a0d39e505dd140cfcbce930abea4e67c4cec87def815ed1e"} Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.816852 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" event={"ID":"0bc9f0db-ee2d-43d3-8fc7-66f2b155c710","Type":"ContainerStarted","Data":"cc1009caa3f66e9dff968b62d354661b246a24d9b9b0d93229615ddb79b5e678"} Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.835884 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5" podStartSLOduration=3.130247287 podStartE2EDuration="43.835864701s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:22.995953941 +0000 UTC m=+831.204275914" lastFinishedPulling="2026-01-20 11:19:03.701571355 +0000 UTC m=+871.909893328" observedRunningTime="2026-01-20 11:19:04.831882475 +0000 UTC m=+873.040204458" watchObservedRunningTime="2026-01-20 11:19:04.835864701 +0000 UTC m=+873.044186674" Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.859576 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-68bc856cb9-sl5rg" podStartSLOduration=2.6908189609999997 podStartE2EDuration="43.859548975s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:23.028988923 +0000 UTC m=+831.237310896" lastFinishedPulling="2026-01-20 11:19:04.197718937 +0000 UTC m=+872.406040910" observedRunningTime="2026-01-20 11:19:04.85430965 +0000 UTC m=+873.062631683" watchObservedRunningTime="2026-01-20 11:19:04.859548975 +0000 UTC m=+873.067870948" Jan 20 11:19:04 crc kubenswrapper[4725]: I0120 11:19:04.877167 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-c88c9f498-lh85b" podStartSLOduration=-9223371992.977663 podStartE2EDuration="43.877113157s" podCreationTimestamp="2026-01-20 11:18:21 +0000 UTC" firstStartedPulling="2026-01-20 11:18:23.144764364 +0000 UTC m=+831.353086337" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:19:04.876534858 +0000 UTC m=+873.084856831" watchObservedRunningTime="2026-01-20 11:19:04.877113157 +0000 UTC m=+873.085435120" Jan 20 11:19:17 crc kubenswrapper[4725]: E0120 11:19:17.227204 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911" Jan 20 11:19:17 crc kubenswrapper[4725]: E0120 11:19:17.227933 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-operator,Image:registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911,Command:[/usr/bin/cert-manager-operator],Args:[start --v=$(OPERATOR_LOG_LEVEL) --trusted-ca-configmap=$(TRUSTED_CA_CONFIGMAP_NAME) --cloud-credentials-secret=$(CLOUD_CREDENTIALS_SECRET_NAME) 
--unsupported-addon-features=$(UNSUPPORTED_ADDON_FEATURES)],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:8443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:cert-manager-operator,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_WEBHOOK,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CA_INJECTOR,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_CONTROLLER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ACMESOLVER,Value:registry.redhat.io/cert-manager/jetstack-cert-manager-acmesolver-rhel9@sha256:ba937fc4b9eee31422914352c11a45b90754ba4fbe490ea45249b90afdc4e0a7,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CERT_MANAGER_ISTIOCSR,Value:registry.redhat.io/cert-manager/cert-manager-istio-csr-rhel9@sha256:af1ac813b8ee414ef215936f05197bc498bccbd540f3e2a93cb522221ba112bc,ValueFrom:nil,},EnvVar{Name:OPERAND_IMAGE_VERSION,Value:1.18.3,ValueFrom:nil,},EnvVar{Name:ISTIOCSR_OPERAND_IMAGE_VERSION,Value:0.14.2,ValueFrom:nil,},EnvVar{Name:OPERATOR_IMAGE_VERSION,Value:1.18.0,ValueFrom:nil,},EnvVar{Name:OPERATOR_LOG_LEVEL,Value:2,ValueFrom:nil,},EnvVar{Name:TRUSTED_CA_CONFIGMAP_NAME,Value:,ValueFrom:nil,},EnvVar{Name:CLOUD_CREDENTIALS_SECRET_NAME,Value:,ValueFrom:nil,},EnvVar{Name:UNSUPPORTED_ADDON_FEATURES,Value:,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:cert-manager-operator.v1.18.0,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{33554432 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5r89m,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*1000680000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-operator-controller-manager-5446d6888b-8p62k_cert-manager-operator(07b8a4cd-9f0f-405a-a03d-749bdd01dcce): 
ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:19:17 crc kubenswrapper[4725]: E0120 11:19:17.229269 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" podUID="07b8a4cd-9f0f-405a-a03d-749bdd01dcce" Jan 20 11:19:18 crc kubenswrapper[4725]: I0120 11:19:18.082808 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" event={"ID":"ce11e344-b219-4b22-b05b-a21b78fc7d98","Type":"ContainerStarted","Data":"7b969b07c35fae20dba239a302f881f99dec25f23bc169ffb8329a5a827a4ddd"} Jan 20 11:19:18 crc kubenswrapper[4725]: E0120 11:19:18.085684 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-operator\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/cert-manager-operator-rhel9@sha256:fa8de363ab4435c1085ac37f1bad488828c6ae8ba361c5f865c27ef577610911\\\"\"" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" podUID="07b8a4cd-9f0f-405a-a03d-749bdd01dcce" Jan 20 11:19:18 crc kubenswrapper[4725]: I0120 11:19:18.132910 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-6886c99b94-tzbc7" podStartSLOduration=3.894345487 podStartE2EDuration="52.132885531s" podCreationTimestamp="2026-01-20 11:18:26 +0000 UTC" firstStartedPulling="2026-01-20 11:18:28.99899247 +0000 UTC m=+837.207314443" lastFinishedPulling="2026-01-20 11:19:17.237532514 +0000 UTC m=+885.445854487" observedRunningTime="2026-01-20 11:19:18.127816341 +0000 UTC m=+886.336138324" watchObservedRunningTime="2026-01-20 11:19:18.132885531 +0000 UTC m=+886.341207514" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.135039 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.136707 4725 util.go:30] "No sandbox for pod can be found. 
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.136707 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.207878 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.208670 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.208788 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209004 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209159 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209319 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209526 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
\"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209738 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209801 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209855 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209908 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209924 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.209960 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.216736 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-dockercfg-rndtg" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217044 4725 reflector.go:368] Caches populated for *v1.Secret from 
object-"service-telemetry"/"elasticsearch-es-default-es-config" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217281 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-unicast-hosts" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217416 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-http-certs-internal" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217545 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"elasticsearch-es-scripts" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217694 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-default-es-transport-certs" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.217814 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-remote-ca" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.218657 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-xpack-file-realm" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.235676 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-internal-users" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.311950 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312019 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312068 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312123 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312150 4725 
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312150 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312177 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312203 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312231 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312254 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312302 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312321 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
(UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312365 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.312934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.313339 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.313689 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.313862 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.314068 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.314295 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.315504 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0" 
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.315729 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.323285 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.335620 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.336534 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.336577 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.336599 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.338702 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.339784 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6\") " pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.361968 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:19 crc kubenswrapper[4725]: I0120 11:19:19.542399 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0"
Jan 20 11:19:20 crc kubenswrapper[4725]: W0120 11:19:20.048607 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf12e47b3_54a1_4f6b_8e7a_0dc9f25358f6.slice/crio-5fb7c044520a0e6d60d2d5811302ca733e0a5faa29d25fc77b3d0cbbfa7c9f39 WatchSource:0}: Error finding container 5fb7c044520a0e6d60d2d5811302ca733e0a5faa29d25fc77b3d0cbbfa7c9f39: Status 404 returned error can't find the container with id 5fb7c044520a0e6d60d2d5811302ca733e0a5faa29d25fc77b3d0cbbfa7c9f39
Jan 20 11:19:20 crc kubenswrapper[4725]: I0120 11:19:20.050919 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:20 crc kubenswrapper[4725]: I0120 11:19:20.098713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerStarted","Data":"5fb7c044520a0e6d60d2d5811302ca733e0a5faa29d25fc77b3d0cbbfa7c9f39"}
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.523024 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" event={"ID":"07b8a4cd-9f0f-405a-a03d-749bdd01dcce","Type":"ContainerStarted","Data":"5f4f37aa9d44600fa14fc73b7a7443feb6748c56330307de679b17b3a3da6422"}
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.524205 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerStarted","Data":"bf9180dab5339ddb58dac81d3d95278af49bc678c69d2ccffd4b22bef1b300a5"}
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.583553 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-5446d6888b-8p62k" podStartSLOduration=2.218155625 podStartE2EDuration="39.583530012s" podCreationTimestamp="2026-01-20 11:19:01 +0000 UTC" firstStartedPulling="2026-01-20 11:19:02.36869349 +0000 UTC m=+870.577015463" lastFinishedPulling="2026-01-20 11:19:39.734067877 +0000 UTC m=+907.942389850" observedRunningTime="2026-01-20 11:19:40.57997493 +0000 UTC m=+908.788296923" watchObservedRunningTime="2026-01-20 11:19:40.583530012 +0000 UTC m=+908.791851995"
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.777068 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:40 crc kubenswrapper[4725]: I0120 11:19:40.813015 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"]
Jan 20 11:19:42 crc kubenswrapper[4725]: I0120 11:19:42.544453 4725 generic.go:334] "Generic (PLEG): container finished" podID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerID="bf9180dab5339ddb58dac81d3d95278af49bc678c69d2ccffd4b22bef1b300a5" exitCode=0
Jan 20 11:19:42 crc kubenswrapper[4725]: I0120 11:19:42.544527 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerDied","Data":"bf9180dab5339ddb58dac81d3d95278af49bc678c69d2ccffd4b22bef1b300a5"}
container finished" podID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerID="a53abbcf725fa8441e49ba86debf3440670cc01b8f475a056b593122277e60f4" exitCode=0 Jan 20 11:19:44 crc kubenswrapper[4725]: I0120 11:19:44.570566 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerDied","Data":"a53abbcf725fa8441e49ba86debf3440670cc01b8f475a056b593122277e60f4"} Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.011817 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bxlks"] Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.013038 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.015189 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"openshift-service-ca.crt" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.015258 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"cert-manager"/"kube-root-ca.crt" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.015484 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-webhook-dockercfg-2dflb" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.034159 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bxlks"] Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.073762 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.075344 4725 util.go:30] "No sandbox for pod can be found. 
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.075344 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.077270 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-global-ca"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079206 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-ca"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079386 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079256 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-1-sys-config"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079503 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079621 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079647 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079680 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079763 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079797 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079821 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bncq9\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-kube-api-access-bncq9\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079846 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079864 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079885 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079923 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079942 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.079958 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build"
pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.181792 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.181873 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.181923 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.181962 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182013 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182038 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bncq9\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-kube-api-access-bncq9\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182245 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182545 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182832 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" 
(UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182907 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183304 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.182270 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183458 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183647 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183694 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183773 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " 
pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183783 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183833 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.183869 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.184324 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.184539 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.184594 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.188780 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.189586 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.201769 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") pod \"service-telemetry-operator-1-build\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.204473 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-bound-sa-token\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.206393 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bncq9\" (UniqueName: \"kubernetes.io/projected/8b639e20-8ca7-4b37-8271-ada2858140b9-kube-api-access-bncq9\") pod \"cert-manager-webhook-f4fb5df64-bxlks\" (UID: \"8b639e20-8ca7-4b37-8271-ada2858140b9\") " pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.329567 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.428956 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.582774 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6","Type":"ContainerStarted","Data":"1c2f4b5de3a927025d64961d3fe81e9e36e4eda258298a00eeafcb2e26c4c7b8"} Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.583882 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:19:45 crc kubenswrapper[4725]: I0120 11:19:45.690130 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=6.742023269 podStartE2EDuration="26.690107586s" podCreationTimestamp="2026-01-20 11:19:19 +0000 UTC" firstStartedPulling="2026-01-20 11:19:20.05106986 +0000 UTC m=+888.259391833" lastFinishedPulling="2026-01-20 11:19:39.999154187 +0000 UTC m=+908.207476150" observedRunningTime="2026-01-20 11:19:45.686288146 +0000 UTC m=+913.894610119" watchObservedRunningTime="2026-01-20 11:19:45.690107586 +0000 UTC m=+913.898429559" Jan 20 11:19:46 crc kubenswrapper[4725]: I0120 11:19:46.001806 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-f4fb5df64-bxlks"] Jan 20 11:19:46 crc kubenswrapper[4725]: I0120 11:19:46.091538 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:19:46 crc kubenswrapper[4725]: W0120 11:19:46.121125 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod61bedcc7_14db_4cb4_b3df_04733ce92bb2.slice/crio-bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2 WatchSource:0}: Error finding container bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2: Status 404 returned error can't find the container with id bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2 Jan 20 11:19:46 crc kubenswrapper[4725]: 
W0120 11:19:46.135308 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod8b639e20_8ca7_4b37_8271_ada2858140b9.slice/crio-ced2d1f415e377beee86adaa4f259c6088594f3f0c276206c6dfc2d0002052ef WatchSource:0}: Error finding container ced2d1f415e377beee86adaa4f259c6088594f3f0c276206c6dfc2d0002052ef: Status 404 returned error can't find the container with id ced2d1f415e377beee86adaa4f259c6088594f3f0c276206c6dfc2d0002052ef Jan 20 11:19:46 crc kubenswrapper[4725]: I0120 11:19:46.589702 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"61bedcc7-14db-4cb4-b3df-04733ce92bb2","Type":"ContainerStarted","Data":"bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2"} Jan 20 11:19:46 crc kubenswrapper[4725]: I0120 11:19:46.592197 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" event={"ID":"8b639e20-8ca7-4b37-8271-ada2858140b9","Type":"ContainerStarted","Data":"ced2d1f415e377beee86adaa4f259c6088594f3f0c276206c6dfc2d0002052ef"} Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.567670 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2"] Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.568746 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.571415 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-cainjector-dockercfg-l72q6" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.579928 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2"] Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.661882 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hkh9h\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-kube-api-access-hkh9h\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.661991 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.763830 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hkh9h\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-kube-api-access-hkh9h\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.764036 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: 
\"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.799830 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-bound-sa-token\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.808968 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hkh9h\" (UniqueName: \"kubernetes.io/projected/62554d79-c9bb-4b40-9153-989791392664-kube-api-access-hkh9h\") pod \"cert-manager-cainjector-855d9ccff4-2m9v2\" (UID: \"62554d79-c9bb-4b40-9153-989791392664\") " pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:47 crc kubenswrapper[4725]: I0120 11:19:47.975846 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" Jan 20 11:19:48 crc kubenswrapper[4725]: I0120 11:19:48.805121 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2"] Jan 20 11:19:48 crc kubenswrapper[4725]: W0120 11:19:48.846815 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod62554d79_c9bb_4b40_9153_989791392664.slice/crio-0ceff8740ff8cbaa35855800ff7d2913348fb63dcf68d1151b56250cb89b4c4c WatchSource:0}: Error finding container 0ceff8740ff8cbaa35855800ff7d2913348fb63dcf68d1151b56250cb89b4c4c: Status 404 returned error can't find the container with id 0ceff8740ff8cbaa35855800ff7d2913348fb63dcf68d1151b56250cb89b4c4c Jan 20 11:19:49 crc kubenswrapper[4725]: I0120 11:19:49.726045 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" event={"ID":"62554d79-c9bb-4b40-9153-989791392664","Type":"ContainerStarted","Data":"0ceff8740ff8cbaa35855800ff7d2913348fb63dcf68d1151b56250cb89b4c4c"} Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.211018 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8pwdf"] Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.212488 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.217475 4725 reflector.go:368] Caches populated for *v1.Secret from object-"cert-manager"/"cert-manager-dockercfg-7fgbw" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.236903 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8pwdf"] Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.270213 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-bound-sa-token\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.270295 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-96f4c\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-kube-api-access-96f4c\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.372110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-bound-sa-token\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.372197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-96f4c\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-kube-api-access-96f4c\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.405068 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-bound-sa-token\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.413412 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-96f4c\" (UniqueName: \"kubernetes.io/projected/f31ab59c-7288-4ebb-82b4-daa77ec5319c-kube-api-access-96f4c\") pod \"cert-manager-86cb77c54b-8pwdf\" (UID: \"f31ab59c-7288-4ebb-82b4-daa77ec5319c\") " pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.565126 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" Jan 20 11:19:54 crc kubenswrapper[4725]: I0120 11:19:54.914827 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerName="elasticsearch" probeResult="failure" output=< Jan 20 11:19:54 crc kubenswrapper[4725]: {"timestamp": "2026-01-20T11:19:54+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 20 11:19:54 crc kubenswrapper[4725]: > Jan 20 11:19:55 crc kubenswrapper[4725]: I0120 11:19:55.110515 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.078205 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.084914 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.091140 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-ca" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.091861 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-global-ca" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.092124 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-2-sys-config" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.133585 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186656 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186722 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186766 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: 
\"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186809 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186837 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186866 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186882 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186931 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186951 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.186999 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292135 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292197 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292239 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292269 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292288 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292311 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292330 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292367 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292382 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292423 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292473 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.292752 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.293552 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.293959 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.293971 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.294270 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " 
pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.294485 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.295274 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.295345 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.295366 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.299223 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.299593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.311705 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") pod \"service-telemetry-operator-2-build\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:57 crc kubenswrapper[4725]: I0120 11:19:57.539836 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:19:59 crc kubenswrapper[4725]: I0120 11:19:59.684740 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerName="elasticsearch" probeResult="failure" output=< Jan 20 11:19:59 crc kubenswrapper[4725]: {"timestamp": "2026-01-20T11:19:59+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 20 11:19:59 crc kubenswrapper[4725]: > Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.070266 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.071746 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.082933 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.224774 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.225281 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.225389 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.326227 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.326326 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.326385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc 
kubenswrapper[4725]: I0120 11:20:02.326934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.327030 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.362689 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") pod \"community-operators-zxwx5\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:02 crc kubenswrapper[4725]: I0120 11:20:02.396285 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:04 crc kubenswrapper[4725]: I0120 11:20:04.682877 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerName="elasticsearch" probeResult="failure" output=< Jan 20 11:20:04 crc kubenswrapper[4725]: {"timestamp": "2026-01-20T11:20:04+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 20 11:20:04 crc kubenswrapper[4725]: > Jan 20 11:20:05 crc kubenswrapper[4725]: E0120 11:20:05.349141 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df" Jan 20 11:20:05 crc kubenswrapper[4725]: E0120 11:20:05.350171 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:cert-manager-webhook,Image:registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df,Command:[/app/cmd/webhook/webhook],Args:[--dynamic-serving-ca-secret-name=cert-manager-webhook-ca --dynamic-serving-ca-secret-namespace=$(POD_NAMESPACE) --dynamic-serving-dns-names=cert-manager-webhook,cert-manager-webhook.$(POD_NAMESPACE),cert-manager-webhook.$(POD_NAMESPACE).svc --secure-port=10250 
--v=2],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:10250,Protocol:TCP,HostIP:,},ContainerPort{Name:healthcheck,HostPort:0,ContainerPort:6080,Protocol:TCP,HostIP:,},ContainerPort{Name:http-metrics,HostPort:0,ContainerPort:9402,Protocol:TCP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.namespace,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:bound-sa-token,ReadOnly:true,MountPath:/var/run/secrets/openshift/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-bncq9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:60,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/healthz,Port:{1 0 healthcheck},Host:,Scheme:HTTP,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:1,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000690000,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod cert-manager-webhook-f4fb5df64-bxlks_cert-manager(8b639e20-8ca7-4b37-8271-ada2858140b9): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:20:05 crc kubenswrapper[4725]: E0120 11:20:05.353410 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" podUID="8b639e20-8ca7-4b37-8271-ada2858140b9" Jan 20 11:20:05 crc kubenswrapper[4725]: E0120 11:20:05.368314 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"cert-manager-webhook\" with ImagePullBackOff: \"Back-off pulling image \\\"registry.redhat.io/cert-manager/jetstack-cert-manager-rhel9@sha256:29a0fa1c2f2a6cee62a0468a3883d16d491b4af29130dad6e3e2bb2948f274df\\\"\"" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" podUID="8b639e20-8ca7-4b37-8271-ada2858140b9" Jan 20 11:20:09 crc kubenswrapper[4725]: I0120 11:20:09.634833 4725 prober.go:107] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" 
podUID="f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6" containerName="elasticsearch" probeResult="failure" output=< Jan 20 11:20:09 crc kubenswrapper[4725]: {"timestamp": "2026-01-20T11:20:09+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 20 11:20:09 crc kubenswrapper[4725]: > Jan 20 11:20:12 crc kubenswrapper[4725]: E0120 11:20:12.923295 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe" Jan 20 11:20:12 crc kubenswrapper[4725]: E0120 11:20:12.924577 4725 kuberuntime_manager.go:1274] "Unhandled Error" err=< Jan 20 11:20:12 crc kubenswrapper[4725]: init container &Container{Name:manage-dockerfile,Image:quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d7a908a23111a624c3fa04dc3105a7a97f48ee60105308bbb6ed42a40d63c2fe,Command:[],Args:[openshift-manage-dockerfile --v=0],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:BUILD,Value:{"kind":"Build","apiVersion":"build.openshift.io/v1","metadata":{"name":"service-telemetry-operator-1","namespace":"service-telemetry","uid":"50b6d2b2-7686-4914-9500-f86942896665","resourceVersion":"34292","generation":1,"creationTimestamp":"2026-01-20T11:19:44Z","labels":{"build":"service-telemetry-operator","buildconfig":"service-telemetry-operator","openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.start-policy":"Serial"},"annotations":{"openshift.io/build-config.name":"service-telemetry-operator","openshift.io/build.number":"1"},"ownerReferences":[{"apiVersion":"build.openshift.io/v1","kind":"BuildConfig","name":"service-telemetry-operator","uid":"b100e8b9-3104-4055-8964-2638b957a434","controller":true}],"managedFields":[{"manager":"openshift-apiserver","operation":"Update","apiVersion":"build.openshift.io/v1","time":"2026-01-20T11:19:44Z","fieldsType":"FieldsV1","fieldsV1":{"f:metadata":{"f:annotations":{".":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.number":{}},"f:labels":{".":{},"f:build":{},"f:buildconfig":{},"f:openshift.io/build-config.name":{},"f:openshift.io/build.start-policy":{}},"f:ownerReferences":{".":{},"k:{\"uid\":\"b100e8b9-3104-4055-8964-2638b957a434\"}":{}}},"f:spec":{"f:output":{"f:to":{}},"f:serviceAccount":{},"f:source":{"f:dockerfile":{},"f:type":{}},"f:strategy":{"f:dockerStrategy":{".":{},"f:from":{}},"f:type":{}},"f:triggeredBy":{}},"f:status":{"f:conditions":{".":{},"k:{\"type\":\"New\"}":{".":{},"f:lastTransitionTime":{},"f:lastUpdateTime":{},"f:status":{},"f:type":{}}},"f:config":{},"f:phase":{}}}}]},"spec":{"serviceAccount":"builder","source":{"type":"Dockerfile","dockerfile":"FROM quay.io/operator-framework/ansible-operator:v1.38.1\n\n# temporarily switch to root user to adjust image layers\nUSER 0\n# Upstream CI builds need the additional EPEL sources for python3-passlib and python3-bcrypt but have no working repos to install epel-release\n# NO_PROXY is undefined in upstream CI builds, but defined (usually blank) during openshift builds (a possibly brittle hack)\nRUN bash -c -- 'if [ \"${NO_PROXY:-__ZZZZZ}\" == \"__ZZZZZ\" ]; then echo \"Applying upstream EPEL hacks\" \u0026\u0026 echo -e \"-----BEGIN PGP PUBLIC KEY 
BLOCK-----\\nmQINBGE3mOsBEACsU+XwJWDJVkItBaugXhXIIkb9oe+7aadELuVo0kBmc3HXt/Yp\\nCJW9hHEiGZ6z2jwgPqyJjZhCvcAWvgzKcvqE+9i0NItV1rzfxrBe2BtUtZmVcuE6\\n2b+SPfxQ2Hr8llaawRjt8BCFX/ZzM4/1Qk+EzlfTcEcpkMf6wdO7kD6ulBk/tbsW\\nDHX2lNcxszTf+XP9HXHWJlA2xBfP+Dk4gl4DnO2Y1xR0OSywE/QtvEbN5cY94ieu\\nn7CBy29AleMhmbnx9pw3NyxcFIAsEZHJoU4ZW9ulAJ/ogttSyAWeacW7eJGW31/Z\\n39cS+I4KXJgeGRI20RmpqfH0tuT+X5Da59YpjYxkbhSK3HYBVnNPhoJFUc2j5iKy\\nXLgkapu1xRnEJhw05kr4LCbud0NTvfecqSqa+59kuVc+zWmfTnGTYc0PXZ6Oa3rK\\n44UOmE6eAT5zd/ToleDO0VesN+EO7CXfRsm7HWGpABF5wNK3vIEF2uRr2VJMvgqS\\n9eNwhJyOzoca4xFSwCkc6dACGGkV+CqhufdFBhmcAsUotSxe3zmrBjqA0B/nxIvH\\nDVgOAMnVCe+Lmv8T0mFgqZSJdIUdKjnOLu/GRFhjDKIak4jeMBMTYpVnU+HhMHLq\\nuDiZkNEvEEGhBQmZuI8J55F/a6UURnxUwT3piyi3Pmr2IFD7ahBxPzOBCQARAQAB\\ntCdGZWRvcmEgKGVwZWw5KSA8ZXBlbEBmZWRvcmFwcm9qZWN0Lm9yZz6JAk4EEwEI\\nADgWIQT/itE0RZcQbs6BO5GKOHK/MihGfAUCYTeY6wIbDwULCQgHAgYVCgkICwIE\\nFgIDAQIeAQIXgAAKCRCKOHK/MihGfFX/EACBPWv20+ttYu1A5WvtHJPzwbj0U4yF\\n3zTQpBglQ2UfkRpYdipTlT3Ih6j5h2VmgRPtINCc/ZE28adrWpBoeFIS2YAKOCLC\\nnZYtHl2nCoLq1U7FSttUGsZ/t8uGCBgnugTfnIYcmlP1jKKA6RJAclK89evDQX5n\\nR9ZD+Cq3CBMlttvSTCht0qQVlwycedH8iWyYgP/mF0W35BIn7NuuZwWhgR00n/VG\\n4nbKPOzTWbsP45awcmivdrS74P6mL84WfkghipdmcoyVb1B8ZP4Y/Ke0RXOnLhNe\\nCfrXXvuW+Pvg2RTfwRDtehGQPAgXbmLmz2ZkV69RGIr54HJv84NDbqZovRTMr7gL\\n9k3ciCzXCiYQgM8yAyGHV0KEhFSQ1HV7gMnt9UmxbxBE2pGU7vu3CwjYga5DpwU7\\nw5wu1TmM5KgZtZvuWOTDnqDLf0cKoIbW8FeeCOn24elcj32bnQDuF9DPey1mqcvT\\n/yEo/Ushyz6CVYxN8DGgcy2M9JOsnmjDx02h6qgWGWDuKgb9jZrvRedpAQCeemEd\\nfhEs6ihqVxRFl16HxC4EVijybhAL76SsM2nbtIqW1apBQJQpXWtQwwdvgTVpdEtE\\nr4ArVJYX5LrswnWEQMOelugUG6S3ZjMfcyOa/O0364iY73vyVgaYK+2XtT2usMux\\nVL469Kj5m13T6w==\\n=Mjs/\\n-----END PGP PUBLIC KEY BLOCK-----\" \u003e /etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9 \u0026\u0026 echo -e \"[epel]\\nname=Extra Packages for Enterprise Linux 9 - \\$basearch\\nmetalink=https://mirrors.fedoraproject.org/metalink?repo=epel-9\u0026arch=\\$basearch\u0026infra=\\$infra\u0026content=\\$contentdir\\nenabled=1\\ngpgcheck=1\\ngpgkey=file:///etc/pki/rpm-gpg/RPM-GPG-KEY-EPEL-9\" \u003e /etc/yum.repos.d/epel.repo; fi'\n\n# update the base image to allow forward-looking optimistic updates during the testing phase, with the added benefit of helping move closer to passing security scans.\n# -- excludes ansible so it remains at 2.9 tag as shipped with the base image\n# -- installs python3-passlib and python3-bcrypt for oauth-proxy interface\n# -- cleans up the cached data from dnf to keep the image as small as possible\nRUN dnf update -y --exclude=ansible* \u0026\u0026 dnf install -y python3-passlib python3-bcrypt \u0026\u0026 dnf clean all \u0026\u0026 rm -rf /var/cache/dnf\n\nCOPY requirements.yml ${HOME}/requirements.yml\nRUN ansible-galaxy collection install -r ${HOME}/requirements.yml \\\n \u0026\u0026 chmod -R ug+rwx ${HOME}/.ansible\n\n# switch back to user 1001 when running the base image (non-root)\nUSER 1001\n\n# copy in required artifacts for the operator\nCOPY watches.yaml ${HOME}/watches.yaml\nCOPY roles/ ${HOME}/roles/\n"},"strategy":{"type":"Docker","dockerStrategy":{"from":{"kind":"DockerImage","name":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e"},"pullSecret":{"name":"builder-dockercfg-ns4k2"}}},"output":{"to":{"kind":"DockerImage","name":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest"},"pushSecret":{"name":"builder-dockercfg-ns4k2"}},"resources":{},"postCommit":{},"nodeSelector":null,"triggeredBy":[{"message":"Image 
change","imageChangeBuild":{"imageID":"quay.io/operator-framework/ansible-operator@sha256:9895727b7f66bb88fa4c6afdefc7eecf86e6b7c1293920f866a035da9decc58e","fromRef":{"kind":"ImageStreamTag","name":"ansible-operator:v1.38.1"}}}]},"status":{"phase":"New","outputDockerImageReference":"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-operator:latest","config":{"kind":"BuildConfig","namespace":"service-telemetry","name":"service-telemetry-operator"},"output":{},"conditions":[{"type":"New","status":"True","lastUpdateTime":"2026-01-20T11:19:44Z","lastTransitionTime":"2026-01-20T11:19:44Z"}]}} Jan 20 11:20:12 crc kubenswrapper[4725]: ,ValueFrom:nil,},EnvVar{Name:LANG,Value:C.utf8,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/registries.conf,ValueFrom:nil,},EnvVar{Name:BUILD_REGISTRIES_DIR_PATH,Value:/var/run/configs/openshift.io/build-system/registries.d,ValueFrom:nil,},EnvVar{Name:BUILD_SIGNATURE_POLICY_PATH,Value:/var/run/configs/openshift.io/build-system/policy.json,ValueFrom:nil,},EnvVar{Name:BUILD_STORAGE_CONF_PATH,Value:/var/run/configs/openshift.io/build-system/storage.conf,ValueFrom:nil,},EnvVar{Name:BUILD_BLOBCACHE_DIR,Value:/var/cache/blobs,ValueFrom:nil,},EnvVar{Name:HTTP_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:http_proxy,Value:,ValueFrom:nil,},EnvVar{Name:HTTPS_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:https_proxy,Value:,ValueFrom:nil,},EnvVar{Name:NO_PROXY,Value:,ValueFrom:nil,},EnvVar{Name:no_proxy,Value:,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:buildworkdir,ReadOnly:false,MountPath:/tmp/build,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-system-configs,ReadOnly:true,MountPath:/var/run/configs/openshift.io/build-system,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-proxy-ca-bundles,ReadOnly:false,MountPath:/var/run/configs/openshift.io/pki,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:build-blob-cache,ReadOnly:false,MountPath:/var/cache/blobs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ww4nf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[CHOWN DAC_OVERRIDE],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:nil,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod service-telemetry-operator-1-build_service-telemetry(61bedcc7-14db-4cb4-b3df-04733ce92bb2): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled Jan 20 11:20:12 crc 
kubenswrapper[4725]: > logger="UnhandledError" Jan 20 11:20:12 crc kubenswrapper[4725]: E0120 11:20:12.925675 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"manage-dockerfile\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/service-telemetry-operator-1-build" podUID="61bedcc7-14db-4cb4-b3df-04733ce92bb2" Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.586157 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" event={"ID":"62554d79-c9bb-4b40-9153-989791392664","Type":"ContainerStarted","Data":"2bf5e4d04ddc60aa888e7534aeb4c84cb529514686c11bb35176038cd25d0012"} Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.657305 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-855d9ccff4-2m9v2" podStartSLOduration=2.49767967 podStartE2EDuration="26.657278802s" podCreationTimestamp="2026-01-20 11:19:47 +0000 UTC" firstStartedPulling="2026-01-20 11:19:48.849695797 +0000 UTC m=+917.058017770" lastFinishedPulling="2026-01-20 11:20:13.009294929 +0000 UTC m=+941.217616902" observedRunningTime="2026-01-20 11:20:13.618537815 +0000 UTC m=+941.826859788" watchObservedRunningTime="2026-01-20 11:20:13.657278802 +0000 UTC m=+941.865600775" Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.855627 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-86cb77c54b-8pwdf"] Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.876406 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:13 crc kubenswrapper[4725]: I0120 11:20:13.890827 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 20 11:20:13 crc kubenswrapper[4725]: W0120 11:20:13.967750 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf31ab59c_7288_4ebb_82b4_daa77ec5319c.slice/crio-43fec4dd514db1d7b72d66d05f174e95de58c16af7ccf64b5cdab891c2b1729d WatchSource:0}: Error finding container 43fec4dd514db1d7b72d66d05f174e95de58c16af7ccf64b5cdab891c2b1729d: Status 404 returned error can't find the container with id 43fec4dd514db1d7b72d66d05f174e95de58c16af7ccf64b5cdab891c2b1729d Jan 20 11:20:13 crc kubenswrapper[4725]: W0120 11:20:13.974559 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod97ae1860_8877_4057_a0b3_75cc22dc085a.slice/crio-201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26 WatchSource:0}: Error finding container 201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26: Status 404 returned error can't find the container with id 201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26 Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.195637 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.275435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.275916 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.275997 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276035 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276131 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276167 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276228 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276261 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276305 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276338 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276364 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.276394 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") pod \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\" (UID: \"61bedcc7-14db-4cb4-b3df-04733ce92bb2\") " Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277093 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277307 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277664 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277726 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277755 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277766 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277882 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.277924 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.278213 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.280989 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.281182 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.281301 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf" (OuterVolumeSpecName: "kube-api-access-ww4nf") pod "61bedcc7-14db-4cb4-b3df-04733ce92bb2" (UID: "61bedcc7-14db-4cb4-b3df-04733ce92bb2"). InnerVolumeSpecName "kube-api-access-ww4nf". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378577 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378623 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378668 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378685 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378700 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ww4nf\" (UniqueName: \"kubernetes.io/projected/61bedcc7-14db-4cb4-b3df-04733ce92bb2-kube-api-access-ww4nf\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378712 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378724 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378739 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378750 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/61bedcc7-14db-4cb4-b3df-04733ce92bb2-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378763 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378774 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.378787 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/61bedcc7-14db-4cb4-b3df-04733ce92bb2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.592103 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerStarted","Data":"bdc0063b36b37d06dc379856cb5b0fadc0c09bbabf1f512be45d0a83560cacb7"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.593025 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"61bedcc7-14db-4cb4-b3df-04733ce92bb2","Type":"ContainerDied","Data":"bc9c0c7add9c510f35cce2bdbad090847dfc1e38b570cc2a6f8e27b6e79c3ca2"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.593067 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.594486 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerStarted","Data":"f10b678b8ec4eea4ea61b8359a1d7d3bd1dd7ebbe07e6f62000e317148f969c3"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.594531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerStarted","Data":"201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.596213 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" event={"ID":"f31ab59c-7288-4ebb-82b4-daa77ec5319c","Type":"ContainerStarted","Data":"43fec4dd514db1d7b72d66d05f174e95de58c16af7ccf64b5cdab891c2b1729d"} Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.654780 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.654839 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 20 11:20:14 crc kubenswrapper[4725]: I0120 11:20:14.941125 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61bedcc7-14db-4cb4-b3df-04733ce92bb2" path="/var/lib/kubelet/pods/61bedcc7-14db-4cb4-b3df-04733ce92bb2/volumes" Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.009192 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.604590 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-86cb77c54b-8pwdf" event={"ID":"f31ab59c-7288-4ebb-82b4-daa77ec5319c","Type":"ContainerStarted","Data":"27a47e4a30c80d1867b3b183dc5f9d11f046145c6c3fa8ee2822bba63ec93501"} Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.609206 4725 generic.go:334] "Generic (PLEG): container finished" podID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerID="f10b678b8ec4eea4ea61b8359a1d7d3bd1dd7ebbe07e6f62000e317148f969c3" exitCode=0 Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.609253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerDied","Data":"f10b678b8ec4eea4ea61b8359a1d7d3bd1dd7ebbe07e6f62000e317148f969c3"} Jan 20 11:20:15 crc kubenswrapper[4725]: I0120 11:20:15.630805 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="cert-manager/cert-manager-86cb77c54b-8pwdf" podStartSLOduration=21.630779659 podStartE2EDuration="21.630779659s" podCreationTimestamp="2026-01-20 11:19:54 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:20:15.624598545 +0000 UTC m=+943.832920538" watchObservedRunningTime="2026-01-20 11:20:15.630779659 +0000 UTC m=+943.839101642" Jan 20 11:20:16 crc kubenswrapper[4725]: I0120 11:20:16.618571 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerStarted","Data":"52450051a6d1549d1549870814b5b9acdd22930bac4cef7a134b62bfd082cec6"} Jan 20 11:20:16 crc kubenswrapper[4725]: I0120 11:20:16.621358 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerStarted","Data":"ee779ff51e27fd1708f83d60b373dec8853b8d3de87c0c17f8dc4fb9cab1a4a0"} Jan 20 11:20:17 crc kubenswrapper[4725]: I0120 11:20:17.632445 4725 generic.go:334] "Generic (PLEG): container finished" podID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerID="52450051a6d1549d1549870814b5b9acdd22930bac4cef7a134b62bfd082cec6" exitCode=0 Jan 20 11:20:17 crc kubenswrapper[4725]: I0120 11:20:17.632549 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerDied","Data":"52450051a6d1549d1549870814b5b9acdd22930bac4cef7a134b62bfd082cec6"} Jan 20 11:20:18 crc kubenswrapper[4725]: I0120 11:20:18.647660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerStarted","Data":"cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e"} Jan 20 11:20:18 crc kubenswrapper[4725]: I0120 11:20:18.700326 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zxwx5" podStartSLOduration=13.865715752 podStartE2EDuration="16.70030341s" podCreationTimestamp="2026-01-20 11:20:02 +0000 UTC" firstStartedPulling="2026-01-20 11:20:15.610800831 +0000 UTC m=+943.819122804" lastFinishedPulling="2026-01-20 11:20:18.445388489 +0000 UTC m=+946.653710462" observedRunningTime="2026-01-20 11:20:18.673875019 +0000 UTC m=+946.882196992" watchObservedRunningTime="2026-01-20 11:20:18.70030341 +0000 UTC m=+946.908625383" Jan 20 11:20:19 crc kubenswrapper[4725]: I0120 11:20:19.654703 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" event={"ID":"8b639e20-8ca7-4b37-8271-ada2858140b9","Type":"ContainerStarted","Data":"2add89d30e143f744c1cb17591bebb1eb8eea1f2bd242850edb4af57b7d84569"} Jan 20 11:20:19 crc kubenswrapper[4725]: I0120 11:20:19.655726 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:20:19 crc kubenswrapper[4725]: I0120 11:20:19.743822 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" podStartSLOduration=-9223372001.110983 podStartE2EDuration="35.743792101s" podCreationTimestamp="2026-01-20 11:19:44 +0000 UTC" firstStartedPulling="2026-01-20 11:19:46.138567808 +0000 UTC m=+914.346889781" 
lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:20:19.742097587 +0000 UTC m=+947.950419570" watchObservedRunningTime="2026-01-20 11:20:19.743792101 +0000 UTC m=+947.952114074" Jan 20 11:20:22 crc kubenswrapper[4725]: I0120 11:20:22.396457 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:22 crc kubenswrapper[4725]: I0120 11:20:22.396834 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:22 crc kubenswrapper[4725]: I0120 11:20:22.452851 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:24 crc kubenswrapper[4725]: I0120 11:20:24.688275 4725 generic.go:334] "Generic (PLEG): container finished" podID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerID="ee779ff51e27fd1708f83d60b373dec8853b8d3de87c0c17f8dc4fb9cab1a4a0" exitCode=0 Jan 20 11:20:24 crc kubenswrapper[4725]: I0120 11:20:24.688348 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerDied","Data":"ee779ff51e27fd1708f83d60b373dec8853b8d3de87c0c17f8dc4fb9cab1a4a0"} Jan 20 11:20:25 crc kubenswrapper[4725]: I0120 11:20:25.334254 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-f4fb5df64-bxlks" Jan 20 11:20:25 crc kubenswrapper[4725]: I0120 11:20:25.696268 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerStarted","Data":"43ba998c4f90eecc50d85d25e6ba1a6776cac8c3c2f9a35a8e03f8ec2c0f026b"} Jan 20 11:20:25 crc kubenswrapper[4725]: I0120 11:20:25.736002 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_60ad3a7d-367d-4604-a9d4-c6e3baf344ac/manage-dockerfile/0.log" Jan 20 11:20:26 crc kubenswrapper[4725]: I0120 11:20:26.707048 4725 generic.go:334] "Generic (PLEG): container finished" podID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerID="43ba998c4f90eecc50d85d25e6ba1a6776cac8c3c2f9a35a8e03f8ec2c0f026b" exitCode=0 Jan 20 11:20:26 crc kubenswrapper[4725]: I0120 11:20:26.707127 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerDied","Data":"43ba998c4f90eecc50d85d25e6ba1a6776cac8c3c2f9a35a8e03f8ec2c0f026b"} Jan 20 11:20:26 crc kubenswrapper[4725]: I0120 11:20:26.707513 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerStarted","Data":"20c11d3a65716216d93d71b1783145f096f25894367021ae7494d22cc9d152e7"} Jan 20 11:20:26 crc kubenswrapper[4725]: I0120 11:20:26.735893 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=28.682577106 podStartE2EDuration="29.735873696s" podCreationTimestamp="2026-01-20 11:19:57 +0000 UTC" firstStartedPulling="2026-01-20 11:20:13.986677763 +0000 UTC m=+942.194999736" lastFinishedPulling="2026-01-20 11:20:15.039974353 +0000 UTC m=+943.248296326" observedRunningTime="2026-01-20 
11:20:26.733547503 +0000 UTC m=+954.941869486" watchObservedRunningTime="2026-01-20 11:20:26.735873696 +0000 UTC m=+954.944195689" Jan 20 11:20:32 crc kubenswrapper[4725]: I0120 11:20:32.564656 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:32 crc kubenswrapper[4725]: I0120 11:20:32.621348 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:33 crc kubenswrapper[4725]: I0120 11:20:33.016009 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zxwx5" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="registry-server" containerID="cri-o://cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e" gracePeriod=2 Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.026025 4725 generic.go:334] "Generic (PLEG): container finished" podID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerID="cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e" exitCode=0 Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.026129 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerDied","Data":"cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e"} Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.379442 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.382774 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") pod \"97ae1860-8877-4057-a0b3-75cc22dc085a\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.382833 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") pod \"97ae1860-8877-4057-a0b3-75cc22dc085a\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.387510 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") pod \"97ae1860-8877-4057-a0b3-75cc22dc085a\" (UID: \"97ae1860-8877-4057-a0b3-75cc22dc085a\") " Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.388950 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities" (OuterVolumeSpecName: "utilities") pod "97ae1860-8877-4057-a0b3-75cc22dc085a" (UID: "97ae1860-8877-4057-a0b3-75cc22dc085a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.393777 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj" (OuterVolumeSpecName: "kube-api-access-757jj") pod "97ae1860-8877-4057-a0b3-75cc22dc085a" (UID: "97ae1860-8877-4057-a0b3-75cc22dc085a"). InnerVolumeSpecName "kube-api-access-757jj". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.452924 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "97ae1860-8877-4057-a0b3-75cc22dc085a" (UID: "97ae1860-8877-4057-a0b3-75cc22dc085a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.489873 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-757jj\" (UniqueName: \"kubernetes.io/projected/97ae1860-8877-4057-a0b3-75cc22dc085a-kube-api-access-757jj\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.489918 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:34 crc kubenswrapper[4725]: I0120 11:20:34.489928 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/97ae1860-8877-4057-a0b3-75cc22dc085a-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.474955 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zxwx5" event={"ID":"97ae1860-8877-4057-a0b3-75cc22dc085a","Type":"ContainerDied","Data":"201bcb06bb9a0beb9ac1fd2b547b80e9dbfd651db405d0308c003374f9fcac26"} Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.475037 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zxwx5" Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.476238 4725 scope.go:117] "RemoveContainer" containerID="cf78224c3b14c5e02f3c08accd80c5248425db36f3d3323b9e50901a9ba2911e" Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.513262 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.518143 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zxwx5"] Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.697864 4725 scope.go:117] "RemoveContainer" containerID="52450051a6d1549d1549870814b5b9acdd22930bac4cef7a134b62bfd082cec6" Jan 20 11:20:35 crc kubenswrapper[4725]: I0120 11:20:35.721096 4725 scope.go:117] "RemoveContainer" containerID="f10b678b8ec4eea4ea61b8359a1d7d3bd1dd7ebbe07e6f62000e317148f969c3" Jan 20 11:20:36 crc kubenswrapper[4725]: I0120 11:20:36.943541 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" path="/var/lib/kubelet/pods/97ae1860-8877-4057-a0b3-75cc22dc085a/volumes" Jan 20 11:20:56 crc kubenswrapper[4725]: I0120 11:20:56.728308 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:20:56 crc kubenswrapper[4725]: I0120 11:20:56.729025 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:21:26 crc kubenswrapper[4725]: I0120 11:21:26.728193 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:21:26 crc kubenswrapper[4725]: I0120 11:21:26.728765 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.727692 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.728484 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.728583 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.729591 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:21:56 crc kubenswrapper[4725]: I0120 11:21:56.729700 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946" gracePeriod=600 Jan 20 11:21:57 crc kubenswrapper[4725]: I0120 11:21:57.848184 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946" exitCode=0 Jan 20 11:21:57 crc kubenswrapper[4725]: I0120 11:21:57.848255 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946"} Jan 20 11:21:57 crc kubenswrapper[4725]: I0120 11:21:57.848527 4725 scope.go:117] "RemoveContainer" containerID="f824083bcf978a042383462398bd5ed39ef803d17307d0e1c02d7c37c541d2e2" Jan 20 11:21:58 crc kubenswrapper[4725]: I0120 11:21:58.857873 4725 kubelet.go:2453] "SyncLoop 
(PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e"} Jan 20 11:22:15 crc kubenswrapper[4725]: I0120 11:22:15.979185 4725 generic.go:334] "Generic (PLEG): container finished" podID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerID="20c11d3a65716216d93d71b1783145f096f25894367021ae7494d22cc9d152e7" exitCode=0 Jan 20 11:22:15 crc kubenswrapper[4725]: I0120 11:22:15.979370 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerDied","Data":"20c11d3a65716216d93d71b1783145f096f25894367021ae7494d22cc9d152e7"} Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.269944 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382427 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382609 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382657 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382681 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382704 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382720 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382743 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: 
\"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382776 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382775 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382814 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382911 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.382969 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.383027 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") pod \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\" (UID: \"60ad3a7d-367d-4604-a9d4-c6e3baf344ac\") " Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.383043 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.384612 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.384758 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.384810 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.384826 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.385003 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.385168 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.385938 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.397294 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.397358 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.397371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx" (OuterVolumeSpecName: "kube-api-access-wvkdx") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "kube-api-access-wvkdx". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.425335 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486024 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486071 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486102 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486115 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wvkdx\" (UniqueName: \"kubernetes.io/projected/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-kube-api-access-wvkdx\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486126 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486138 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.486149 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.566871 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:17 crc kubenswrapper[4725]: I0120 11:22:17.587997 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:18 crc kubenswrapper[4725]: I0120 11:22:18.003513 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"60ad3a7d-367d-4604-a9d4-c6e3baf344ac","Type":"ContainerDied","Data":"bdc0063b36b37d06dc379856cb5b0fadc0c09bbabf1f512be45d0a83560cacb7"} Jan 20 11:22:18 crc kubenswrapper[4725]: I0120 11:22:18.003669 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 20 11:22:18 crc kubenswrapper[4725]: I0120 11:22:18.003873 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bdc0063b36b37d06dc379856cb5b0fadc0c09bbabf1f512be45d0a83560cacb7" Jan 20 11:22:19 crc kubenswrapper[4725]: I0120 11:22:19.932508 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "60ad3a7d-367d-4604-a9d4-c6e3baf344ac" (UID: "60ad3a7d-367d-4604-a9d4-c6e3baf344ac"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:20 crc kubenswrapper[4725]: I0120 11:22:20.025999 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/60ad3a7d-367d-4604-a9d4-c6e3baf344ac-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.268523 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.268895 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="extract-utilities" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.268913 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="extract-utilities" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.268950 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="docker-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.268959 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="docker-build" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.268974 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="registry-server" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.268984 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="registry-server" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.268998 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="extract-content" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269005 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" 
containerName="extract-content" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.269019 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="git-clone" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269026 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="git-clone" Jan 20 11:22:22 crc kubenswrapper[4725]: E0120 11:22:22.269039 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="manage-dockerfile" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269046 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="manage-dockerfile" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269246 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="97ae1860-8877-4057-a0b3-75cc22dc085a" containerName="registry-server" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.269283 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="60ad3a7d-367d-4604-a9d4-c6e3baf344ac" containerName="docker-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.270256 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.272551 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-ca" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.275331 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-global-ca" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.275981 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-1-sys-config" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.277517 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.290785 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460117 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460199 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460364 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: 
\"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460397 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460434 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460480 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460566 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460597 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460625 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460743 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.460851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc 
kubenswrapper[4725]: I0120 11:22:22.460888 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562234 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562323 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562345 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562373 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562400 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562431 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562449 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") pod 
\"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562475 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562498 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562515 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562534 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562541 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.562692 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.563238 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.563291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.563349 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.563655 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.564135 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.564184 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.564670 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.568690 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.568784 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.581020 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") pod \"smart-gateway-operator-1-build\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.587964 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:22 crc kubenswrapper[4725]: I0120 11:22:22.816634 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:23 crc kubenswrapper[4725]: I0120 11:22:23.048716 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerStarted","Data":"e143ed9303cc23ae40660343e336bf7f7112b03b8ad95715a6c91ca243263bfc"} Jan 20 11:22:24 crc kubenswrapper[4725]: I0120 11:22:24.059494 4725 generic.go:334] "Generic (PLEG): container finished" podID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerID="135099adb6c1ba00def83244671f6d756db543de8887db638c4aa7e04a4e4320" exitCode=0 Jan 20 11:22:24 crc kubenswrapper[4725]: I0120 11:22:24.059582 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerDied","Data":"135099adb6c1ba00def83244671f6d756db543de8887db638c4aa7e04a4e4320"} Jan 20 11:22:25 crc kubenswrapper[4725]: I0120 11:22:25.068991 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerStarted","Data":"97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a"} Jan 20 11:22:25 crc kubenswrapper[4725]: I0120 11:22:25.102207 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=3.10218538 podStartE2EDuration="3.10218538s" podCreationTimestamp="2026-01-20 11:22:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:22:25.097334598 +0000 UTC m=+1073.305656581" watchObservedRunningTime="2026-01-20 11:22:25.10218538 +0000 UTC m=+1073.310507353" Jan 20 11:22:33 crc kubenswrapper[4725]: I0120 11:22:33.001163 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:33 crc kubenswrapper[4725]: I0120 11:22:33.002166 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="docker-build" containerID="cri-o://97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a" gracePeriod=30 Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.596185 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.597874 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.599591 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.599748 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.599856 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600033 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600189 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600517 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600675 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" 
Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600807 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600953 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.601072 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.600841 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-sys-config" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.601550 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.601581 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-global-ca" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.601621 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-2-ca" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.638960 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.703517 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.703906 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704056 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") pod 
\"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704269 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704415 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704574 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704269 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704364 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.704762 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.705555 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.705717 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.706371 4725 reconciler_common.go:218] "operationExecutor.MountVolume 
started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.706733 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.706805 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707061 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707228 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707123 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707387 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.707722 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.708053 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.708163 
4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.710843 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.718884 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.725279 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") pod \"smart-gateway-operator-2-build\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:34 crc kubenswrapper[4725]: I0120 11:22:34.920205 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:22:35 crc kubenswrapper[4725]: I0120 11:22:35.215281 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 20 11:22:36 crc kubenswrapper[4725]: I0120 11:22:36.150967 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerStarted","Data":"98181af6e77e8ee77db38b4f0c99449b03d0506d97628efdcf708eefa0be2fbf"} Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.929813 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_773de81e-167f-4cc1-b0b2-6f97183bc92d/docker-build/0.log" Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.933791 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.993537 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_773de81e-167f-4cc1-b0b2-6f97183bc92d/docker-build/0.log" Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.994364 4725 generic.go:334] "Generic (PLEG): container finished" podID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerID="97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a" exitCode=1 Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.994408 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerDied","Data":"97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a"} Jan 20 11:22:38 crc kubenswrapper[4725]: I0120 11:22:38.994642 4725 scope.go:117] "RemoveContainer" containerID="97723ba68356e746392c9f83e3eacd3d84034b3d22936c905410f195432e1a2a" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.049371 4725 scope.go:117] "RemoveContainer" containerID="135099adb6c1ba00def83244671f6d756db543de8887db638c4aa7e04a4e4320" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090056 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090224 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090330 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090431 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090523 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090548 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090605 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume 
started for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090663 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090694 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090742 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090780 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.090862 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") pod \"773de81e-167f-4cc1-b0b2-6f97183bc92d\" (UID: \"773de81e-167f-4cc1-b0b2-6f97183bc92d\") " Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.091694 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.092248 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.093561 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.093662 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.094005 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.094860 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.095424 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.095614 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.108469 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.108852 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l" (OuterVolumeSpecName: "kube-api-access-fd25l") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "kube-api-access-fd25l". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193334 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193645 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193656 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fd25l\" (UniqueName: \"kubernetes.io/projected/773de81e-167f-4cc1-b0b2-6f97183bc92d-kube-api-access-fd25l\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193665 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193674 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193683 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193691 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193701 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/773de81e-167f-4cc1-b0b2-6f97183bc92d-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193709 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.193721 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/773de81e-167f-4cc1-b0b2-6f97183bc92d-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.233073 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.295274 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.482247 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "773de81e-167f-4cc1-b0b2-6f97183bc92d" (UID: "773de81e-167f-4cc1-b0b2-6f97183bc92d"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:22:39 crc kubenswrapper[4725]: I0120 11:22:39.497958 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/773de81e-167f-4cc1-b0b2-6f97183bc92d-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.003782 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerStarted","Data":"81fe7e7689ade9572879bd6f7042234d45798c3c4c7d5639d8337cc6cf420f3f"} Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.004917 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"773de81e-167f-4cc1-b0b2-6f97183bc92d","Type":"ContainerDied","Data":"e143ed9303cc23ae40660343e336bf7f7112b03b8ad95715a6c91ca243263bfc"} Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.004968 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.072144 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.081370 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 20 11:22:40 crc kubenswrapper[4725]: I0120 11:22:40.940908 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" path="/var/lib/kubelet/pods/773de81e-167f-4cc1-b0b2-6f97183bc92d/volumes" Jan 20 11:22:41 crc kubenswrapper[4725]: I0120 11:22:41.013841 4725 generic.go:334] "Generic (PLEG): container finished" podID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerID="81fe7e7689ade9572879bd6f7042234d45798c3c4c7d5639d8337cc6cf420f3f" exitCode=0 Jan 20 11:22:41 crc kubenswrapper[4725]: I0120 11:22:41.013972 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerDied","Data":"81fe7e7689ade9572879bd6f7042234d45798c3c4c7d5639d8337cc6cf420f3f"} Jan 20 11:22:42 crc kubenswrapper[4725]: I0120 11:22:42.026167 4725 generic.go:334] "Generic (PLEG): container finished" podID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerID="4e2384deeb121a865456908896ebca254f51bb793e35d64ef64a12bdeadadd7a" exitCode=0 Jan 20 11:22:42 crc kubenswrapper[4725]: I0120 11:22:42.026262 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerDied","Data":"4e2384deeb121a865456908896ebca254f51bb793e35d64ef64a12bdeadadd7a"} Jan 20 11:22:42 crc kubenswrapper[4725]: I0120 11:22:42.077993 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7/manage-dockerfile/0.log" Jan 20 11:22:43 crc kubenswrapper[4725]: I0120 11:22:43.037273 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerStarted","Data":"8fcccce2002a99e8d036bd2beffff0773e9c3730f24376f47ce2a54c9456a0d8"} Jan 20 11:22:43 crc kubenswrapper[4725]: I0120 11:22:43.067550 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=9.067534198 podStartE2EDuration="9.067534198s" podCreationTimestamp="2026-01-20 11:22:34 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:22:43.06503686 +0000 UTC m=+1091.273358853" watchObservedRunningTime="2026-01-20 11:22:43.067534198 +0000 UTC m=+1091.275856171" Jan 20 11:23:59 crc kubenswrapper[4725]: I0120 11:23:59.767723 4725 generic.go:334] "Generic (PLEG): container finished" podID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerID="8fcccce2002a99e8d036bd2beffff0773e9c3730f24376f47ce2a54c9456a0d8" exitCode=0 Jan 20 11:23:59 crc kubenswrapper[4725]: I0120 11:23:59.767797 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerDied","Data":"8fcccce2002a99e8d036bd2beffff0773e9c3730f24376f47ce2a54c9456a0d8"} 
Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.167896 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318451 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318667 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318632 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318717 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318764 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318804 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318859 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318928 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318961 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " 
Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.318991 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319042 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319068 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319119 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") pod \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\" (UID: \"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7\") " Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319328 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.319060 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.320529 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.322770 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.322957 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.323468 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.323924 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.326842 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.329275 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.330630 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c" (OuterVolumeSpecName: "kube-api-access-mtb6c") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "kube-api-access-mtb6c". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420461 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420509 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420520 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420532 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420542 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420553 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420563 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mtb6c\" (UniqueName: \"kubernetes.io/projected/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-kube-api-access-mtb6c\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420572 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.420587 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.527639 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:01 crc kubenswrapper[4725]: I0120 11:24:01.623162 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:02 crc kubenswrapper[4725]: I0120 11:24:02.026841 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7","Type":"ContainerDied","Data":"98181af6e77e8ee77db38b4f0c99449b03d0506d97628efdcf708eefa0be2fbf"} Jan 20 11:24:02 crc kubenswrapper[4725]: I0120 11:24:02.026901 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="98181af6e77e8ee77db38b4f0c99449b03d0506d97628efdcf708eefa0be2fbf" Jan 20 11:24:02 crc kubenswrapper[4725]: I0120 11:24:02.027051 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 20 11:24:03 crc kubenswrapper[4725]: I0120 11:24:03.493707 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" (UID: "de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:03 crc kubenswrapper[4725]: I0120 11:24:03.544345 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.207953 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210737 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="docker-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210804 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="docker-build" Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210836 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="manage-dockerfile" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210844 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="manage-dockerfile" Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210860 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="manage-dockerfile" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210868 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="manage-dockerfile" Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210884 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="docker-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210892 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" 
containerName="docker-build" Jan 20 11:24:06 crc kubenswrapper[4725]: E0120 11:24:06.210902 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="git-clone" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.210908 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="git-clone" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.211112 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7" containerName="docker-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.211136 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="773de81e-167f-4cc1-b0b2-6f97183bc92d" containerName="docker-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.212281 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.214686 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.215679 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-sys-config" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.216221 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-ca" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.216836 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-1-global-ca" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.241709 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.396970 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397048 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397122 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397191 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397229 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397256 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397330 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397368 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397446 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397492 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397516 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.397562 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499714 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499788 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499845 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499875 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499899 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499905 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.499925 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500040 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500110 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500180 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500207 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500240 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500561 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500728 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.500735 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.501162 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.501466 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.501557 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.524695 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " 
pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.833059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.833072 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.835810 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.836483 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") pod \"sg-core-1-build\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " pod="service-telemetry/sg-core-1-build" Jan 20 11:24:06 crc kubenswrapper[4725]: I0120 11:24:06.838139 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 20 11:24:07 crc kubenswrapper[4725]: I0120 11:24:07.117059 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 20 11:24:08 crc kubenswrapper[4725]: I0120 11:24:08.077291 4725 generic.go:334] "Generic (PLEG): container finished" podID="182d8f8c-6787-460f-8886-13e082da325a" containerID="c4f2e6c9a2af8b906bd1ba4f2529ffa261f97bfacfd90048175544cbe8a4306b" exitCode=0 Jan 20 11:24:08 crc kubenswrapper[4725]: I0120 11:24:08.077353 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerDied","Data":"c4f2e6c9a2af8b906bd1ba4f2529ffa261f97bfacfd90048175544cbe8a4306b"} Jan 20 11:24:08 crc kubenswrapper[4725]: I0120 11:24:08.077798 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerStarted","Data":"702a14aac73a2067eb1d2ba924037c10061638d34d12490a8dd8993d2df2b036"} Jan 20 11:24:09 crc kubenswrapper[4725]: I0120 11:24:09.100913 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerStarted","Data":"6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee"} Jan 20 11:24:16 crc kubenswrapper[4725]: I0120 11:24:16.436121 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-1-build" podStartSLOduration=10.436098051 podStartE2EDuration="10.436098051s" podCreationTimestamp="2026-01-20 11:24:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:24:09.134784481 +0000 
UTC m=+1177.343106464" watchObservedRunningTime="2026-01-20 11:24:16.436098051 +0000 UTC m=+1184.644420024" Jan 20 11:24:16 crc kubenswrapper[4725]: I0120 11:24:16.437365 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 20 11:24:16 crc kubenswrapper[4725]: I0120 11:24:16.437672 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="docker-build" containerID="cri-o://6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee" gracePeriod=30 Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.139796 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.141669 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.148433 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-sys-config" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.149498 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-global-ca" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.149630 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-core-2-ca" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.161981 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.170700 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_182d8f8c-6787-460f-8886-13e082da325a/docker-build/0.log" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.171659 4725 generic.go:334] "Generic (PLEG): container finished" podID="182d8f8c-6787-460f-8886-13e082da325a" containerID="6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee" exitCode=1 Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.171765 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerDied","Data":"6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee"} Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316295 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316327 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") pod \"sg-core-2-build\" (UID: 
\"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316362 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316381 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316471 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316564 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316591 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316632 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316689 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.316768 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418595 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418626 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418674 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418718 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418774 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.418887 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.419889 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " 
pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.419928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420025 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420114 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420209 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420320 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420375 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420375 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420421 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420628 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420804 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420887 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.420986 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.431627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.440504 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.440595 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") pod \"sg-core-2-build\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.460272 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.739047 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_182d8f8c-6787-460f-8886-13e082da325a/docker-build/0.log" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.739703 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826416 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826469 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826507 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826529 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826591 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826610 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826728 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826763 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826786 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826811 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826783 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826839 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826990 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") pod \"182d8f8c-6787-460f-8886-13e082da325a\" (UID: \"182d8f8c-6787-460f-8886-13e082da325a\") " Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.826719 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.827625 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.827648 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/182d8f8c-6787-460f-8886-13e082da325a-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.827855 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.828118 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.828157 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.828880 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.828928 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.831205 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5" (OuterVolumeSpecName: "kube-api-access-j8wb5") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "kube-api-access-j8wb5". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.831208 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.832013 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.922890 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929325 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929356 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929370 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929380 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929389 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929400 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929409 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/182d8f8c-6787-460f-8886-13e082da325a-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929418 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j8wb5\" (UniqueName: \"kubernetes.io/projected/182d8f8c-6787-460f-8886-13e082da325a-kube-api-access-j8wb5\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.929426 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/182d8f8c-6787-460f-8886-13e082da325a-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.966645 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "182d8f8c-6787-460f-8886-13e082da325a" (UID: "182d8f8c-6787-460f-8886-13e082da325a"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:24:18 crc kubenswrapper[4725]: I0120 11:24:18.972891 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.031708 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/182d8f8c-6787-460f-8886-13e082da325a-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.181497 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_182d8f8c-6787-460f-8886-13e082da325a/docker-build/0.log" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.182129 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"182d8f8c-6787-460f-8886-13e082da325a","Type":"ContainerDied","Data":"702a14aac73a2067eb1d2ba924037c10061638d34d12490a8dd8993d2df2b036"} Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.182176 4725 scope.go:117] "RemoveContainer" containerID="6910602968e4bebfe7fdf222aea020a439b1e739664b65c1cb39d5ad08c283ee" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.182306 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.186333 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerStarted","Data":"bd507f0738d2c0694eccbbc95fb5272e4409e25b64f458f756e9a1b54394396a"} Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.239301 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.243679 4725 scope.go:117] "RemoveContainer" containerID="c4f2e6c9a2af8b906bd1ba4f2529ffa261f97bfacfd90048175544cbe8a4306b" Jan 20 11:24:19 crc kubenswrapper[4725]: I0120 11:24:19.245059 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 20 11:24:20 crc kubenswrapper[4725]: I0120 11:24:20.194041 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerStarted","Data":"d3ff8338b376ac72548be35879a4a833227c1231f0fa7c77e46446ef53b15d94"} Jan 20 11:24:20 crc kubenswrapper[4725]: I0120 11:24:20.944956 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="182d8f8c-6787-460f-8886-13e082da325a" path="/var/lib/kubelet/pods/182d8f8c-6787-460f-8886-13e082da325a/volumes" Jan 20 11:24:21 crc kubenswrapper[4725]: I0120 11:24:21.205421 4725 generic.go:334] "Generic (PLEG): container finished" podID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerID="d3ff8338b376ac72548be35879a4a833227c1231f0fa7c77e46446ef53b15d94" exitCode=0 Jan 20 11:24:21 crc kubenswrapper[4725]: I0120 11:24:21.205469 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerDied","Data":"d3ff8338b376ac72548be35879a4a833227c1231f0fa7c77e46446ef53b15d94"} Jan 20 11:24:22 crc kubenswrapper[4725]: I0120 11:24:22.215941 4725 generic.go:334] "Generic (PLEG): container finished" podID="c6289b31-17e1-4470-b65b-20f1454c9faf" 
containerID="032caed499b46e9aa411fe435c34a0b25328813786d4b4a1fa4195b3137ed331" exitCode=0 Jan 20 11:24:22 crc kubenswrapper[4725]: I0120 11:24:22.216386 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerDied","Data":"032caed499b46e9aa411fe435c34a0b25328813786d4b4a1fa4195b3137ed331"} Jan 20 11:24:22 crc kubenswrapper[4725]: I0120 11:24:22.257493 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_c6289b31-17e1-4470-b65b-20f1454c9faf/manage-dockerfile/0.log" Jan 20 11:24:23 crc kubenswrapper[4725]: I0120 11:24:23.233260 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerStarted","Data":"fc80fd4244af16703439fe94645efe3c29505a7b5b8bb53579030c06197a023e"} Jan 20 11:24:23 crc kubenswrapper[4725]: I0120 11:24:23.272552 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.272509208 podStartE2EDuration="5.272509208s" podCreationTimestamp="2026-01-20 11:24:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:24:23.265165257 +0000 UTC m=+1191.473487250" watchObservedRunningTime="2026-01-20 11:24:23.272509208 +0000 UTC m=+1191.480831181" Jan 20 11:24:26 crc kubenswrapper[4725]: I0120 11:24:26.727935 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:24:26 crc kubenswrapper[4725]: I0120 11:24:26.728438 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:24:56 crc kubenswrapper[4725]: I0120 11:24:56.727779 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:24:56 crc kubenswrapper[4725]: I0120 11:24:56.728801 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.728296 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.729194 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" 
podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.729271 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.730223 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:25:26 crc kubenswrapper[4725]: I0120 11:25:26.730289 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e" gracePeriod=600 Jan 20 11:25:27 crc kubenswrapper[4725]: I0120 11:25:27.439497 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e" exitCode=0 Jan 20 11:25:27 crc kubenswrapper[4725]: I0120 11:25:27.439713 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e"} Jan 20 11:25:27 crc kubenswrapper[4725]: I0120 11:25:27.440058 4725 scope.go:117] "RemoveContainer" containerID="617ea79dfea330e669f8b0c629d26e31c927c0e6f932d5db78fa3a4c4d666946" Jan 20 11:25:28 crc kubenswrapper[4725]: I0120 11:25:28.450772 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3"} Jan 20 11:27:42 crc kubenswrapper[4725]: I0120 11:27:42.061506 4725 generic.go:334] "Generic (PLEG): container finished" podID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerID="fc80fd4244af16703439fe94645efe3c29505a7b5b8bb53579030c06197a023e" exitCode=0 Jan 20 11:27:42 crc kubenswrapper[4725]: I0120 11:27:42.061524 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerDied","Data":"fc80fd4244af16703439fe94645efe3c29505a7b5b8bb53579030c06197a023e"} Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.323732 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.399908 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400039 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400096 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400121 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400156 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400179 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400208 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400254 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400294 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400298 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400344 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400417 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400488 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") pod \"c6289b31-17e1-4470-b65b-20f1454c9faf\" (UID: \"c6289b31-17e1-4470-b65b-20f1454c9faf\") " Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.401445 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.401499 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.401789 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.400895 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.402230 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.402305 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.408274 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.408331 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.408457 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk" (OuterVolumeSpecName: "kube-api-access-tfxhk") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "kube-api-access-tfxhk". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.413430 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504272 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-tfxhk\" (UniqueName: \"kubernetes.io/projected/c6289b31-17e1-4470-b65b-20f1454c9faf-kube-api-access-tfxhk\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504691 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504773 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504839 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504910 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/c6289b31-17e1-4470-b65b-20f1454c9faf-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.504995 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/c6289b31-17e1-4470-b65b-20f1454c9faf-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.505064 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.505148 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/c6289b31-17e1-4470-b65b-20f1454c9faf-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.505219 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:43 crc kubenswrapper[4725]: I0120 11:27:43.957375 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:27:44 crc kubenswrapper[4725]: I0120 11:27:44.014198 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:44 crc kubenswrapper[4725]: I0120 11:27:44.084331 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"c6289b31-17e1-4470-b65b-20f1454c9faf","Type":"ContainerDied","Data":"bd507f0738d2c0694eccbbc95fb5272e4409e25b64f458f756e9a1b54394396a"} Jan 20 11:27:44 crc kubenswrapper[4725]: I0120 11:27:44.084833 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bd507f0738d2c0694eccbbc95fb5272e4409e25b64f458f756e9a1b54394396a" Jan 20 11:27:44 crc kubenswrapper[4725]: I0120 11:27:44.084475 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 20 11:27:46 crc kubenswrapper[4725]: I0120 11:27:46.003127 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "c6289b31-17e1-4470-b65b-20f1454c9faf" (UID: "c6289b31-17e1-4470-b65b-20f1454c9faf"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:27:46 crc kubenswrapper[4725]: I0120 11:27:46.049276 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/c6289b31-17e1-4470-b65b-20f1454c9faf-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.928919 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930017 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="manage-dockerfile" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930041 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="manage-dockerfile" Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930098 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930106 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930120 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="manage-dockerfile" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930128 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="manage-dockerfile" Jan 20 11:27:48 crc kubenswrapper[4725]: E0120 11:27:48.930138 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930146 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="docker-build" Jan 20 11:27:48 crc 
kubenswrapper[4725]: E0120 11:27:48.930157 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="git-clone" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930163 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="git-clone" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930322 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="182d8f8c-6787-460f-8886-13e082da325a" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.930362 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c6289b31-17e1-4470-b65b-20f1454c9faf" containerName="docker-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.931350 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.937328 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.938284 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-ca" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.938293 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-global-ca" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.944698 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-1-sys-config" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.949166 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.996499 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.996598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.996641 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.996829 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997006 4725 reconciler_common.go:245] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997114 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997261 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997307 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997333 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpct7\" (UniqueName: \"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997358 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997399 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:48 crc kubenswrapper[4725]: I0120 11:27:48.997492 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.099217 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 
11:27:49.100334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.100500 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.100631 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.100738 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.100844 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101281 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101374 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.099995 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101535 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101418 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101618 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101659 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xpct7\" (UniqueName: \"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101698 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101776 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101844 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.101932 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.102192 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.102682 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") pod 
\"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.103465 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.109153 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.109224 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.123547 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpct7\" (UniqueName: \"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") pod \"sg-bridge-1-build\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.257211 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 20 11:27:49 crc kubenswrapper[4725]: I0120 11:27:49.530818 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:27:50 crc kubenswrapper[4725]: I0120 11:27:50.130479 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerStarted","Data":"79cfb45a8c90dfaa65e6bb289f91b498d6f80d05aa29a1f6d45fa2050d0f30eb"} Jan 20 11:27:50 crc kubenswrapper[4725]: I0120 11:27:50.130993 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerStarted","Data":"ef75771651c7edad9549b94a38308f8a219d2601293a77dd16261018ecc03c5a"} Jan 20 11:27:51 crc kubenswrapper[4725]: I0120 11:27:51.142221 4725 generic.go:334] "Generic (PLEG): container finished" podID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerID="79cfb45a8c90dfaa65e6bb289f91b498d6f80d05aa29a1f6d45fa2050d0f30eb" exitCode=0 Jan 20 11:27:51 crc kubenswrapper[4725]: I0120 11:27:51.142334 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerDied","Data":"79cfb45a8c90dfaa65e6bb289f91b498d6f80d05aa29a1f6d45fa2050d0f30eb"} Jan 20 11:27:52 crc kubenswrapper[4725]: I0120 11:27:52.154129 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerStarted","Data":"c7d0859d4065010f243a29a233ddba2921cdc8ab64a769f55e2ecc4ca1c5a41a"} Jan 20 11:27:52 crc kubenswrapper[4725]: I0120 
11:27:52.185116 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=4.185068231 podStartE2EDuration="4.185068231s" podCreationTimestamp="2026-01-20 11:27:48 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:27:52.178064151 +0000 UTC m=+1400.386386144" watchObservedRunningTime="2026-01-20 11:27:52.185068231 +0000 UTC m=+1400.393390204" Jan 20 11:27:56 crc kubenswrapper[4725]: I0120 11:27:56.728293 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:27:56 crc kubenswrapper[4725]: I0120 11:27:56.729203 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:27:59 crc kubenswrapper[4725]: I0120 11:27:59.210277 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4/docker-build/0.log" Jan 20 11:27:59 crc kubenswrapper[4725]: I0120 11:27:59.211407 4725 generic.go:334] "Generic (PLEG): container finished" podID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerID="c7d0859d4065010f243a29a233ddba2921cdc8ab64a769f55e2ecc4ca1c5a41a" exitCode=1 Jan 20 11:27:59 crc kubenswrapper[4725]: I0120 11:27:59.211475 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerDied","Data":"c7d0859d4065010f243a29a233ddba2921cdc8ab64a769f55e2ecc4ca1c5a41a"} Jan 20 11:27:59 crc kubenswrapper[4725]: I0120 11:27:59.258253 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.474122 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4/docker-build/0.log" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.474972 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.479891 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.479927 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.479957 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.479995 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480047 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480067 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480119 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480143 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480177 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480331 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480334 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480381 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpct7\" (UniqueName: \"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480436 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.480486 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") pod \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\" (UID: \"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4\") " Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481305 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481357 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481298 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481318 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481462 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.481627 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.482577 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.488204 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.488234 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.488269 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7" (OuterVolumeSpecName: "kube-api-access-xpct7") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "kube-api-access-xpct7". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.571025 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.581918 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582360 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582439 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582513 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582626 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582709 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.582774 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xpct7\" (UniqueName: \"kubernetes.io/projected/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-kube-api-access-xpct7\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.583168 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.583230 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.894250 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" (UID: "bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.914146 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 20 11:28:00 crc kubenswrapper[4725]: E0120 11:28:00.915654 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="manage-dockerfile" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.915703 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="manage-dockerfile" Jan 20 11:28:00 crc kubenswrapper[4725]: E0120 11:28:00.915715 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="docker-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.915724 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="docker-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.915902 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" containerName="docker-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.917246 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.922104 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-sys-config" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.923344 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-ca" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.926734 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"sg-bridge-2-global-ca" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.928476 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988236 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988298 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988336 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988467 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988490 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988546 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988574 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988596 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988668 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988692 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:00 crc kubenswrapper[4725]: I0120 11:28:00.988740 4725 reconciler_common.go:293] "Volume 
detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.089256 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.089328 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.089358 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090360 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090443 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090472 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090495 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090505 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090532 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") pod 
\"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090678 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090784 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090861 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090923 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.090989 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091040 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091199 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091337 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091420 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc 
kubenswrapper[4725]: I0120 11:28:01.091597 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.091928 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.092584 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.093703 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.094564 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.109811 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") pod \"sg-bridge-2-build\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.229675 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4/docker-build/0.log" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.230042 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4","Type":"ContainerDied","Data":"ef75771651c7edad9549b94a38308f8a219d2601293a77dd16261018ecc03c5a"} Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.230108 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ef75771651c7edad9549b94a38308f8a219d2601293a77dd16261018ecc03c5a" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.230179 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.236331 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.257049 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.263422 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 20 11:28:01 crc kubenswrapper[4725]: I0120 11:28:01.494876 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 20 11:28:02 crc kubenswrapper[4725]: I0120 11:28:02.266464 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerStarted","Data":"a4903596b33031d7aed7600a9e2bb86e46e90e8822bbe874f78076489c05a258"} Jan 20 11:28:02 crc kubenswrapper[4725]: I0120 11:28:02.268300 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerStarted","Data":"6780fdbbe3f7a45599b0514328dfab3ade3905ca8a25ac03e4edfbe11fcd11a8"} Jan 20 11:28:02 crc kubenswrapper[4725]: I0120 11:28:02.941356 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4" path="/var/lib/kubelet/pods/bbbf5e8a-fe4e-4933-bcfc-3152cfaa61a4/volumes" Jan 20 11:28:03 crc kubenswrapper[4725]: I0120 11:28:03.280483 4725 generic.go:334] "Generic (PLEG): container finished" podID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerID="a4903596b33031d7aed7600a9e2bb86e46e90e8822bbe874f78076489c05a258" exitCode=0 Jan 20 11:28:03 crc kubenswrapper[4725]: I0120 11:28:03.280660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerDied","Data":"a4903596b33031d7aed7600a9e2bb86e46e90e8822bbe874f78076489c05a258"} Jan 20 11:28:04 crc kubenswrapper[4725]: I0120 11:28:04.292441 4725 generic.go:334] "Generic (PLEG): container finished" podID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerID="aebcdd2389cf5555810a810b8ba5ed5db46fceb8094ee87e91d2217e630e31e3" exitCode=0 Jan 20 11:28:04 crc kubenswrapper[4725]: I0120 11:28:04.293025 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerDied","Data":"aebcdd2389cf5555810a810b8ba5ed5db46fceb8094ee87e91d2217e630e31e3"} Jan 20 11:28:04 crc kubenswrapper[4725]: I0120 11:28:04.341547 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286/manage-dockerfile/0.log" Jan 20 11:28:05 crc kubenswrapper[4725]: I0120 11:28:05.305934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerStarted","Data":"5e42726132cce6cccfbcebe76e994c0bbf095e27ce3388781ab16bb72f1fbb76"} Jan 20 11:28:05 crc kubenswrapper[4725]: I0120 11:28:05.336222 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-2-build" podStartSLOduration=5.336191473 podStartE2EDuration="5.336191473s" podCreationTimestamp="2026-01-20 11:28:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:28:05.330650319 +0000 UTC 
m=+1413.538972302" watchObservedRunningTime="2026-01-20 11:28:05.336191473 +0000 UTC m=+1413.544513446" Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.760645 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"] Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.762973 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.789699 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"] Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.908525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.908604 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:18 crc kubenswrapper[4725]: I0120 11:28:18.908927 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.010480 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.010640 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.010679 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.011229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.011628 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.033291 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") pod \"redhat-operators-62jw6\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.087681 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:19 crc kubenswrapper[4725]: I0120 11:28:19.591482 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"] Jan 20 11:28:19 crc kubenswrapper[4725]: W0120 11:28:19.600419 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podcef150c1_b17c_4f6f_8103_016969a51c8d.slice/crio-6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca WatchSource:0}: Error finding container 6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca: Status 404 returned error can't find the container with id 6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca Jan 20 11:28:20 crc kubenswrapper[4725]: I0120 11:28:20.411563 4725 generic.go:334] "Generic (PLEG): container finished" podID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerID="d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005" exitCode=0 Jan 20 11:28:20 crc kubenswrapper[4725]: I0120 11:28:20.411660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerDied","Data":"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005"} Jan 20 11:28:20 crc kubenswrapper[4725]: I0120 11:28:20.411997 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerStarted","Data":"6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca"} Jan 20 11:28:20 crc kubenswrapper[4725]: I0120 11:28:20.414239 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:28:22 crc kubenswrapper[4725]: I0120 11:28:22.430572 4725 generic.go:334] "Generic (PLEG): container finished" podID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerID="afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47" exitCode=0 Jan 20 11:28:22 crc kubenswrapper[4725]: I0120 11:28:22.430624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerDied","Data":"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47"} Jan 20 11:28:24 crc kubenswrapper[4725]: I0120 11:28:24.450337 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerStarted","Data":"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"} Jan 20 11:28:24 crc kubenswrapper[4725]: I0120 11:28:24.509198 4725 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-62jw6" podStartSLOduration=3.6270652439999997 podStartE2EDuration="6.50916597s" podCreationTimestamp="2026-01-20 11:28:18 +0000 UTC" firstStartedPulling="2026-01-20 11:28:20.413769366 +0000 UTC m=+1428.622091339" lastFinishedPulling="2026-01-20 11:28:23.295870072 +0000 UTC m=+1431.504192065" observedRunningTime="2026-01-20 11:28:24.505425022 +0000 UTC m=+1432.713747005" watchObservedRunningTime="2026-01-20 11:28:24.50916597 +0000 UTC m=+1432.717487943" Jan 20 11:28:26 crc kubenswrapper[4725]: I0120 11:28:26.728209 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:28:26 crc kubenswrapper[4725]: I0120 11:28:26.728861 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:28:29 crc kubenswrapper[4725]: I0120 11:28:29.088133 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:29 crc kubenswrapper[4725]: I0120 11:28:29.088228 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:30 crc kubenswrapper[4725]: I0120 11:28:30.143598 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-62jw6" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server" probeResult="failure" output=< Jan 20 11:28:30 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:28:30 crc kubenswrapper[4725]: > Jan 20 11:28:39 crc kubenswrapper[4725]: I0120 11:28:39.132821 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:39 crc kubenswrapper[4725]: I0120 11:28:39.178410 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:39 crc kubenswrapper[4725]: I0120 11:28:39.378311 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"] Jan 20 11:28:40 crc kubenswrapper[4725]: I0120 11:28:40.580241 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-62jw6" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server" containerID="cri-o://c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b" gracePeriod=2 Jan 20 11:28:40 crc kubenswrapper[4725]: I0120 11:28:40.958744 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.088424 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") pod \"cef150c1-b17c-4f6f-8103-016969a51c8d\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.089478 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") pod \"cef150c1-b17c-4f6f-8103-016969a51c8d\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.089585 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") pod \"cef150c1-b17c-4f6f-8103-016969a51c8d\" (UID: \"cef150c1-b17c-4f6f-8103-016969a51c8d\") " Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.091856 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities" (OuterVolumeSpecName: "utilities") pod "cef150c1-b17c-4f6f-8103-016969a51c8d" (UID: "cef150c1-b17c-4f6f-8103-016969a51c8d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.108263 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr" (OuterVolumeSpecName: "kube-api-access-wlsjr") pod "cef150c1-b17c-4f6f-8103-016969a51c8d" (UID: "cef150c1-b17c-4f6f-8103-016969a51c8d"). InnerVolumeSpecName "kube-api-access-wlsjr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.192581 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.192631 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wlsjr\" (UniqueName: \"kubernetes.io/projected/cef150c1-b17c-4f6f-8103-016969a51c8d-kube-api-access-wlsjr\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.237309 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cef150c1-b17c-4f6f-8103-016969a51c8d" (UID: "cef150c1-b17c-4f6f-8103-016969a51c8d"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.294693 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cef150c1-b17c-4f6f-8103-016969a51c8d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.591782 4725 generic.go:334] "Generic (PLEG): container finished" podID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerID="c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b" exitCode=0 Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.591839 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerDied","Data":"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"} Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.591887 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-62jw6" event={"ID":"cef150c1-b17c-4f6f-8103-016969a51c8d","Type":"ContainerDied","Data":"6cbff9fb4de70c7a85d041406c13f9cc25777a12d6d7bd5fbb1f26642bdd57ca"} Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.591903 4725 scope.go:117] "RemoveContainer" containerID="c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.592056 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-62jw6" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.614692 4725 scope.go:117] "RemoveContainer" containerID="afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.635001 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"] Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.640189 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-62jw6"] Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.650738 4725 scope.go:117] "RemoveContainer" containerID="d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.668790 4725 scope.go:117] "RemoveContainer" containerID="c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b" Jan 20 11:28:41 crc kubenswrapper[4725]: E0120 11:28:41.669499 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b\": container with ID starting with c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b not found: ID does not exist" containerID="c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.669544 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b"} err="failed to get container status \"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b\": rpc error: code = NotFound desc = could not find container \"c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b\": container with ID starting with c8f1b6331ce812853ba8a0c67beb1a1b27dc4616738cbd2627af52165b530e0b not found: ID does not exist" Jan 20 11:28:41 crc 
kubenswrapper[4725]: I0120 11:28:41.669570 4725 scope.go:117] "RemoveContainer" containerID="afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47" Jan 20 11:28:41 crc kubenswrapper[4725]: E0120 11:28:41.670156 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47\": container with ID starting with afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47 not found: ID does not exist" containerID="afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.670183 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47"} err="failed to get container status \"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47\": rpc error: code = NotFound desc = could not find container \"afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47\": container with ID starting with afd8b9b55b9ef5ee8fdcc31d664fcbaf0a0b3c5d196ce6a09aa4afa6fc6e6b47 not found: ID does not exist" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.670200 4725 scope.go:117] "RemoveContainer" containerID="d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005" Jan 20 11:28:41 crc kubenswrapper[4725]: E0120 11:28:41.670610 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005\": container with ID starting with d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005 not found: ID does not exist" containerID="d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005" Jan 20 11:28:41 crc kubenswrapper[4725]: I0120 11:28:41.670635 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005"} err="failed to get container status \"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005\": rpc error: code = NotFound desc = could not find container \"d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005\": container with ID starting with d746042eaeefcf33c76db8ffb0fa7d4dee76a283d927872450bda3af655c4005 not found: ID does not exist" Jan 20 11:28:42 crc kubenswrapper[4725]: I0120 11:28:42.949970 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" path="/var/lib/kubelet/pods/cef150c1-b17c-4f6f-8103-016969a51c8d/volumes" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.536389 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"] Jan 20 11:28:50 crc kubenswrapper[4725]: E0120 11:28:50.537516 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="extract-content" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.537538 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="extract-content" Jan 20 11:28:50 crc kubenswrapper[4725]: E0120 11:28:50.537559 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="extract-utilities" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.537567 4725 
state_mem.go:107] "Deleted CPUSet assignment" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="extract-utilities" Jan 20 11:28:50 crc kubenswrapper[4725]: E0120 11:28:50.537580 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.537588 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.537822 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="cef150c1-b17c-4f6f-8103-016969a51c8d" containerName="registry-server" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.538952 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.540822 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.540940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.541034 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.562120 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"] Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.642273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.642533 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.642592 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc 
kubenswrapper[4725]: I0120 11:28:50.642971 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.643183 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.678297 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") pod \"certified-operators-vkfs6\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:50 crc kubenswrapper[4725]: I0120 11:28:50.860284 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:28:51 crc kubenswrapper[4725]: I0120 11:28:51.251575 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"] Jan 20 11:28:51 crc kubenswrapper[4725]: I0120 11:28:51.675217 4725 generic.go:334] "Generic (PLEG): container finished" podID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerID="697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e" exitCode=0 Jan 20 11:28:51 crc kubenswrapper[4725]: I0120 11:28:51.675307 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerDied","Data":"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e"} Jan 20 11:28:51 crc kubenswrapper[4725]: I0120 11:28:51.675830 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerStarted","Data":"2d4727da80686ae11420fefc3155e2cfb58d10a64c59aa8f6a79ffbd6e6c73e2"} Jan 20 11:28:54 crc kubenswrapper[4725]: I0120 11:28:54.702340 4725 generic.go:334] "Generic (PLEG): container finished" podID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerID="9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959" exitCode=0 Jan 20 11:28:54 crc kubenswrapper[4725]: I0120 11:28:54.703253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerDied","Data":"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959"} Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.728673 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.728787 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" 
podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.728866 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.729798 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:28:56 crc kubenswrapper[4725]: I0120 11:28:56.729861 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3" gracePeriod=600 Jan 20 11:28:57 crc kubenswrapper[4725]: I0120 11:28:57.727524 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3" exitCode=0 Jan 20 11:28:57 crc kubenswrapper[4725]: I0120 11:28:57.727614 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3"} Jan 20 11:28:57 crc kubenswrapper[4725]: I0120 11:28:57.728130 4725 scope.go:117] "RemoveContainer" containerID="aa33ee4c62e22f867acace91c0f155de88e0f9773671659eb6aa460399ed540e" Jan 20 11:28:58 crc kubenswrapper[4725]: I0120 11:28:58.738919 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"} Jan 20 11:28:58 crc kubenswrapper[4725]: I0120 11:28:58.741196 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerStarted","Data":"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"} Jan 20 11:28:59 crc kubenswrapper[4725]: I0120 11:28:59.776554 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-vkfs6" podStartSLOduration=4.129314189 podStartE2EDuration="9.776526199s" podCreationTimestamp="2026-01-20 11:28:50 +0000 UTC" firstStartedPulling="2026-01-20 11:28:52.685286767 +0000 UTC m=+1460.893608760" lastFinishedPulling="2026-01-20 11:28:58.332498797 +0000 UTC m=+1466.540820770" observedRunningTime="2026-01-20 11:28:59.771274714 +0000 UTC m=+1467.979596677" watchObservedRunningTime="2026-01-20 11:28:59.776526199 +0000 UTC m=+1467.984848172" Jan 20 11:29:00 crc kubenswrapper[4725]: I0120 11:29:00.861268 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:29:00 crc kubenswrapper[4725]: I0120 
11:29:00.861355 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:29:00 crc kubenswrapper[4725]: I0120 11:29:00.942603 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:29:03 crc kubenswrapper[4725]: I0120 11:29:03.797157 4725 generic.go:334] "Generic (PLEG): container finished" podID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerID="5e42726132cce6cccfbcebe76e994c0bbf095e27ce3388781ab16bb72f1fbb76" exitCode=0 Jan 20 11:29:03 crc kubenswrapper[4725]: I0120 11:29:03.797251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerDied","Data":"5e42726132cce6cccfbcebe76e994c0bbf095e27ce3388781ab16bb72f1fbb76"} Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.061575 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200220 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200302 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200344 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200365 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200439 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200464 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "buildcachedir". 
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200493 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200553 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200558 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200591 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200713 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200750 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200786 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.200810 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") pod \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\" (UID: \"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286\") " Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.201053 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.201067 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.201662 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.201750 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.203058 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.203586 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.204170 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.207643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.208258 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz" (OuterVolumeSpecName: "kube-api-access-dwbcz") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "kube-api-access-dwbcz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.209260 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302267 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302748 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302831 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302923 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dwbcz\" (UniqueName: \"kubernetes.io/projected/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-kube-api-access-dwbcz\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.302989 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.303108 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.303177 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.303251 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.327966 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.404963 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.817034 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286","Type":"ContainerDied","Data":"6780fdbbe3f7a45599b0514328dfab3ade3905ca8a25ac03e4edfbe11fcd11a8"} Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.817215 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6780fdbbe3f7a45599b0514328dfab3ade3905ca8a25ac03e4edfbe11fcd11a8" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.817219 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 20 11:29:05 crc kubenswrapper[4725]: I0120 11:29:05.984835 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" (UID: "5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:06 crc kubenswrapper[4725]: I0120 11:29:06.013886 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.105626 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:10 crc kubenswrapper[4725]: E0120 11:29:10.106863 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="docker-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.106883 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="docker-build" Jan 20 11:29:10 crc kubenswrapper[4725]: E0120 11:29:10.106906 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="manage-dockerfile" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.106914 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="manage-dockerfile" Jan 20 11:29:10 crc kubenswrapper[4725]: E0120 11:29:10.106924 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="git-clone" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.106932 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="git-clone" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.107098 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286" containerName="docker-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.108072 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.110986 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-ca" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.111256 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.111287 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-global-ca" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.123715 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-1-sys-config" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.125590 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283362 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k7hww\" (UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283445 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283484 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283505 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283525 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283872 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " 
pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283948 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.283996 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.284069 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.284115 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.284227 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.284279 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.385780 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.385907 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k7hww\" (UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.385947 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.385980 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386009 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386008 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386036 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386128 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386211 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386247 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: 
\"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386378 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386398 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.386448 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387015 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387213 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387319 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387464 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387586 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.387769 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.388457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.399526 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.399857 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.407281 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k7hww\" (UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.427578 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.650538 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.854126 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerStarted","Data":"36a666488ecd6d15d08d3ab59870b43434b273fc6058e7f55ac7c1ecc6d3a04a"} Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.910813 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:29:10 crc kubenswrapper[4725]: I0120 11:29:10.974986 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"] Jan 20 11:29:11 crc kubenswrapper[4725]: I0120 11:29:11.863729 4725 generic.go:334] "Generic (PLEG): container finished" podID="0db9d434-26af-4738-bb93-05cd9b720c87" containerID="c39d6de3e24d8f3a14c460d9395b3e4c5d0c7f4110899d7ced5dff416dd88a6f" exitCode=0 Jan 20 11:29:11 crc kubenswrapper[4725]: I0120 11:29:11.864429 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-vkfs6" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="registry-server" containerID="cri-o://dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2" gracePeriod=2 Jan 20 11:29:11 crc kubenswrapper[4725]: I0120 11:29:11.863915 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerDied","Data":"c39d6de3e24d8f3a14c460d9395b3e4c5d0c7f4110899d7ced5dff416dd88a6f"} Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.271263 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.322807 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") pod \"19e454ee-77bb-40ff-a78b-661546d1cc26\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.322888 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") pod \"19e454ee-77bb-40ff-a78b-661546d1cc26\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.323045 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") pod \"19e454ee-77bb-40ff-a78b-661546d1cc26\" (UID: \"19e454ee-77bb-40ff-a78b-661546d1cc26\") " Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.324498 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities" (OuterVolumeSpecName: "utilities") pod "19e454ee-77bb-40ff-a78b-661546d1cc26" (UID: "19e454ee-77bb-40ff-a78b-661546d1cc26"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.332546 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh" (OuterVolumeSpecName: "kube-api-access-stfjh") pod "19e454ee-77bb-40ff-a78b-661546d1cc26" (UID: "19e454ee-77bb-40ff-a78b-661546d1cc26"). InnerVolumeSpecName "kube-api-access-stfjh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.386807 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "19e454ee-77bb-40ff-a78b-661546d1cc26" (UID: "19e454ee-77bb-40ff-a78b-661546d1cc26"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.425490 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.425534 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-stfjh\" (UniqueName: \"kubernetes.io/projected/19e454ee-77bb-40ff-a78b-661546d1cc26-kube-api-access-stfjh\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.425547 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/19e454ee-77bb-40ff-a78b-661546d1cc26-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.879546 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerStarted","Data":"cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5"} Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.886882 4725 generic.go:334] "Generic (PLEG): container finished" podID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerID="dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2" exitCode=0 Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.886957 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-vkfs6" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.886966 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerDied","Data":"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"} Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.887070 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-vkfs6" event={"ID":"19e454ee-77bb-40ff-a78b-661546d1cc26","Type":"ContainerDied","Data":"2d4727da80686ae11420fefc3155e2cfb58d10a64c59aa8f6a79ffbd6e6c73e2"} Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.887113 4725 scope.go:117] "RemoveContainer" containerID="dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.910889 4725 scope.go:117] "RemoveContainer" containerID="9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.918292 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=2.918270725 podStartE2EDuration="2.918270725s" podCreationTimestamp="2026-01-20 11:29:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:29:12.910690657 +0000 UTC m=+1481.119012650" watchObservedRunningTime="2026-01-20 11:29:12.918270725 +0000 UTC m=+1481.126592688" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.942915 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"] Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.944715 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-vkfs6"] Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.956864 4725 scope.go:117] "RemoveContainer" containerID="697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.977272 4725 scope.go:117] "RemoveContainer" containerID="dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2" Jan 20 11:29:12 crc kubenswrapper[4725]: E0120 11:29:12.977902 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2\": container with ID starting with dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2 not found: ID does not exist" containerID="dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.977950 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2"} err="failed to get container status \"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2\": rpc error: code = NotFound desc = could not find container \"dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2\": container with ID starting with dd054d47ac60c0a05d4f5f487a487dc59c9d8edb55ac724e00a76e2e06771bd2 not found: ID does not exist" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.977991 4725 scope.go:117] "RemoveContainer" 
containerID="9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959" Jan 20 11:29:12 crc kubenswrapper[4725]: E0120 11:29:12.978370 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959\": container with ID starting with 9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959 not found: ID does not exist" containerID="9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.978425 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959"} err="failed to get container status \"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959\": rpc error: code = NotFound desc = could not find container \"9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959\": container with ID starting with 9fe80860779e71290c302ab4d2ade7e5495476679097aec296b10762eb20d959 not found: ID does not exist" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.978472 4725 scope.go:117] "RemoveContainer" containerID="697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e" Jan 20 11:29:12 crc kubenswrapper[4725]: E0120 11:29:12.979105 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e\": container with ID starting with 697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e not found: ID does not exist" containerID="697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e" Jan 20 11:29:12 crc kubenswrapper[4725]: I0120 11:29:12.979169 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e"} err="failed to get container status \"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e\": rpc error: code = NotFound desc = could not find container \"697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e\": container with ID starting with 697c829883daaf99403d534b91f22a7c51fba27ae733892f60ac1cb76ed67d9e not found: ID does not exist" Jan 20 11:29:14 crc kubenswrapper[4725]: I0120 11:29:14.942028 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" path="/var/lib/kubelet/pods/19e454ee-77bb-40ff-a78b-661546d1cc26/volumes" Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.614939 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.615928 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="docker-build" containerID="cri-o://cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5" gracePeriod=30 Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.948375 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_0db9d434-26af-4738-bb93-05cd9b720c87/docker-build/0.log" Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.949582 4725 generic.go:334] "Generic (PLEG): container finished" 
podID="0db9d434-26af-4738-bb93-05cd9b720c87" containerID="cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5" exitCode=1 Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.949650 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerDied","Data":"cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5"} Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.949690 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"0db9d434-26af-4738-bb93-05cd9b720c87","Type":"ContainerDied","Data":"36a666488ecd6d15d08d3ab59870b43434b273fc6058e7f55ac7c1ecc6d3a04a"} Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.949702 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="36a666488ecd6d15d08d3ab59870b43434b273fc6058e7f55ac7c1ecc6d3a04a" Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.984237 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_0db9d434-26af-4738-bb93-05cd9b720c87/docker-build/0.log" Jan 20 11:29:20 crc kubenswrapper[4725]: I0120 11:29:20.984875 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125450 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125524 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125587 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125622 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125672 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125731 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") pod 
\"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125753 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125772 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125797 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125863 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125900 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k7hww\" (UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.125932 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") pod \"0db9d434-26af-4738-bb93-05cd9b720c87\" (UID: \"0db9d434-26af-4738-bb93-05cd9b720c87\") " Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.126047 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.126356 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.128494 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). 
InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.128992 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129103 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129121 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129197 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/0db9d434-26af-4738-bb93-05cd9b720c87-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129210 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129012 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.129781 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.135559 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.135873 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.136941 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww" (OuterVolumeSpecName: "kube-api-access-k7hww") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "kube-api-access-k7hww". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.212509 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230587 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230822 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230840 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230855 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230891 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/0db9d434-26af-4738-bb93-05cd9b720c87-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230908 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230920 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/0db9d434-26af-4738-bb93-05cd9b720c87-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.230932 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k7hww\" 
(UniqueName: \"kubernetes.io/projected/0db9d434-26af-4738-bb93-05cd9b720c87-kube-api-access-k7hww\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.523998 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "0db9d434-26af-4738-bb93-05cd9b720c87" (UID: "0db9d434-26af-4738-bb93-05cd9b720c87"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.536319 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/0db9d434-26af-4738-bb93-05cd9b720c87-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:29:21 crc kubenswrapper[4725]: I0120 11:29:21.955799 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.020790 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.033565 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230425 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230780 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="extract-utilities" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230800 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="extract-utilities" Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230813 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="registry-server" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230821 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="registry-server" Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230835 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="docker-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230842 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="docker-build" Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230863 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="extract-content" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230869 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="extract-content" Jan 20 11:29:22 crc kubenswrapper[4725]: E0120 11:29:22.230883 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="manage-dockerfile" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.230894 4725 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="manage-dockerfile" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.231043 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" containerName="docker-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.231065 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="19e454ee-77bb-40ff-a78b-661546d1cc26" containerName="registry-server" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.232221 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.234228 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-sys-config" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.235820 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-global-ca" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.236018 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.238395 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-webhook-snmp-2-ca" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248342 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248415 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248465 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248491 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248519 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " 
pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248579 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248596 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248617 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248652 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248674 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248699 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.248719 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.258632 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.349892 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") pod 
\"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.349960 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.349989 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350015 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350050 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350074 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350130 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350151 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350172 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350425 4725 operation_generator.go:637] "MountVolume.SetUp 
succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350481 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350547 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350585 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350593 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350711 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.350811 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.351046 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.351546 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc 
kubenswrapper[4725]: I0120 11:29:22.351564 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.351697 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.354296 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.355357 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.362934 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.377610 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.550781 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.763903 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"]
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.943286 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0db9d434-26af-4738-bb93-05cd9b720c87" path="/var/lib/kubelet/pods/0db9d434-26af-4738-bb93-05cd9b720c87/volumes"
Jan 20 11:29:22 crc kubenswrapper[4725]: I0120 11:29:22.986749 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerStarted","Data":"6a9aff2c07fcb35085b065af0d4d52d91283e430a1d195a0198fb4e039bb9494"}
Jan 20 11:29:23 crc kubenswrapper[4725]: I0120 11:29:23.997402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerStarted","Data":"4445b1a0e79c8d9eb8c5ea0bd6b3f97b942d1c463c2f1b85d3e880737f51ed91"}
Jan 20 11:29:25 crc kubenswrapper[4725]: I0120 11:29:25.007693 4725 generic.go:334] "Generic (PLEG): container finished" podID="851c53a0-c674-49b2-88dc-77da0a70406b" containerID="4445b1a0e79c8d9eb8c5ea0bd6b3f97b942d1c463c2f1b85d3e880737f51ed91" exitCode=0
Jan 20 11:29:25 crc kubenswrapper[4725]: I0120 11:29:25.007780 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerDied","Data":"4445b1a0e79c8d9eb8c5ea0bd6b3f97b942d1c463c2f1b85d3e880737f51ed91"}
Jan 20 11:29:26 crc kubenswrapper[4725]: I0120 11:29:26.021341 4725 generic.go:334] "Generic (PLEG): container finished" podID="851c53a0-c674-49b2-88dc-77da0a70406b" containerID="6218152dbfd3ab2c2a840223eb50d597295ccfb61dc4dd813ca3437b108d3143" exitCode=0
Jan 20 11:29:26 crc kubenswrapper[4725]: I0120 11:29:26.021469 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerDied","Data":"6218152dbfd3ab2c2a840223eb50d597295ccfb61dc4dd813ca3437b108d3143"}
Jan 20 11:29:26 crc kubenswrapper[4725]: I0120 11:29:26.077500 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_851c53a0-c674-49b2-88dc-77da0a70406b/manage-dockerfile/0.log"
Jan 20 11:29:27 crc kubenswrapper[4725]: I0120 11:29:27.034434 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerStarted","Data":"d02a02aca60254ad250ffe6b9525dda6f9b904e95118572ca9b292f09c32136b"}
Jan 20 11:29:27 crc kubenswrapper[4725]: I0120 11:29:27.072778 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.072751965 podStartE2EDuration="5.072751965s" podCreationTimestamp="2026-01-20 11:29:22 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:29:27.066467997 +0000 UTC m=+1495.274789970" watchObservedRunningTime="2026-01-20 11:29:27.072751965 +0000 UTC m=+1495.281073938"
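
The pod_startup_latency_tracker entry above reports podStartSLOduration=5.072751965 for prometheus-webhook-snmp-2-build, i.e. the time from podCreationTimestamp (11:29:22) to the observed running time (11:29:27). A minimal Python sketch that re-derives this figure from such an entry; the entry string is abridged and all names below are mine, not part of the log:

import re
from datetime import datetime

entry = ('podStartSLOduration=5.072751965 '
         'podCreationTimestamp="2026-01-20 11:29:22 +0000 UTC" '
         'observedRunningTime="2026-01-20 11:29:27.066467997 +0000 UTC m=+1495.274789970"')

def parse_k8s_time(raw):
    # Drop the " UTC m=+..." suffix and truncate nanoseconds to microseconds,
    # since datetime.strptime handles at most six fractional digits.
    date, clock, tz = raw.split(" UTC")[0].split()
    if "." in clock:
        head, frac = clock.split(".")
        clock = head + "." + frac[:6]
        return datetime.strptime(f"{date} {clock} {tz}", "%Y-%m-%d %H:%M:%S.%f %z")
    return datetime.strptime(f"{date} {clock} {tz}", "%Y-%m-%d %H:%M:%S %z")

created = parse_k8s_time(re.search(r'podCreationTimestamp="([^"]+)"', entry).group(1))
running = parse_k8s_time(re.search(r'observedRunningTime="([^"]+)"', entry).group(1))
print((running - created).total_seconds())  # ~5.07 s, consistent with podStartSLOduration
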
"SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.160643 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.164907 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.165345 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.175717 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.305196 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.305353 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.305798 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.407207 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.407293 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.407318 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:00 crc 
Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.408290 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"
Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.424999 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"
Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.426595 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") pod \"collect-profiles-29481810-txmbc\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"
Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.487060 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"
Jan 20 11:30:00 crc kubenswrapper[4725]: I0120 11:30:00.924312 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"]
Jan 20 11:30:01 crc kubenswrapper[4725]: I0120 11:30:01.307489 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" event={"ID":"0fdb152c-7b26-4ed6-8bb8-6a846224c67b","Type":"ContainerStarted","Data":"9b0dcdea8536fd69cc550db76c797c2b233941b5ed5fc0345fea4348ff9e28b4"}
Jan 20 11:30:02 crc kubenswrapper[4725]: I0120 11:30:02.321282 4725 generic.go:334] "Generic (PLEG): container finished" podID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" containerID="19fb964594f75fcdba986836c9a966bf2aa65e41d99e7666a933d08acb12b332" exitCode=0
Jan 20 11:30:02 crc kubenswrapper[4725]: I0120 11:30:02.321472 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" event={"ID":"0fdb152c-7b26-4ed6-8bb8-6a846224c67b","Type":"ContainerDied","Data":"19fb964594f75fcdba986836c9a966bf2aa65e41d99e7666a933d08acb12b332"}
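
Each volume of collect-profiles-29481810-txmbc appears twice above: reconciler_common.go logs "operationExecutor.MountVolume started", and operation_generator.go follows with "MountVolume.SetUp succeeded" carrying the same UniqueName. A hedged sketch (function and pattern names are mine, not from kubelet) that pairs the two and reports volumes whose mount never confirmed; for the entries above it returns an empty set:

import re

# The raw journal text contains literal \" sequences around quoted values.
STARTED = re.compile(r'MountVolume started for volume.*?UniqueName: \\"([^\\]+)\\"')
SUCCEEDED = re.compile(r'MountVolume\.SetUp succeeded for volume.*?UniqueName: \\"([^\\]+)\\"')

def pending_mounts(lines):
    """UniqueNames that logged 'MountVolume started' but no 'SetUp succeeded'."""
    started = {m.group(1) for line in lines for m in STARTED.finditer(line)}
    succeeded = {m.group(1) for line in lines for m in SUCCEEDED.finditer(line)}
    return started - succeeded
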
Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.584916 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"
Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.759683 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") pod \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") "
Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.759798 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") pod \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") "
Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.759936 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") pod \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\" (UID: \"0fdb152c-7b26-4ed6-8bb8-6a846224c67b\") "
Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.761839 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume" (OuterVolumeSpecName: "config-volume") pod "0fdb152c-7b26-4ed6-8bb8-6a846224c67b" (UID: "0fdb152c-7b26-4ed6-8bb8-6a846224c67b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.767985 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "0fdb152c-7b26-4ed6-8bb8-6a846224c67b" (UID: "0fdb152c-7b26-4ed6-8bb8-6a846224c67b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.782229 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85" (OuterVolumeSpecName: "kube-api-access-48v85") pod "0fdb152c-7b26-4ed6-8bb8-6a846224c67b" (UID: "0fdb152c-7b26-4ed6-8bb8-6a846224c67b"). InnerVolumeSpecName "kube-api-access-48v85". PluginName "kubernetes.io/projected", VolumeGidValue ""
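
Teardown is logged in three phases per volume: "operationExecutor.UnmountVolume started" (reconciler_common.go:159), "UnmountVolume.TearDown succeeded" (operation_generator.go:803), and finally "Volume detached" (reconciler_common.go:293), which follows below. A small sketch, with hypothetical names, that tallies which phase each volume has reached:

import re
from collections import defaultdict

PHASES = {
    "unmount-started": re.compile(r'UnmountVolume started for volume \\"([^\\]+)\\"'),
    "teardown-succeeded": re.compile(
        r'UnmountVolume\.TearDown succeeded for volume "[^"]+" \(OuterVolumeSpecName: "([^"]+)"\)'),
    "detached": re.compile(r'Volume detached for volume \\"([^\\]+)\\"'),
}

def teardown_progress(lines):
    """Map each volume name to the set of teardown phases seen so far."""
    progress = defaultdict(set)
    for line in lines:
        for phase, pattern in PHASES.items():
            for name in pattern.findall(line):
                progress[name].add(phase)
    return progress  # volumes missing "detached" are still being torn down
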
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.861801 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-48v85\" (UniqueName: \"kubernetes.io/projected/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-kube-api-access-48v85\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.862353 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:03 crc kubenswrapper[4725]: I0120 11:30:03.862367 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/0fdb152c-7b26-4ed6-8bb8-6a846224c67b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:04 crc kubenswrapper[4725]: I0120 11:30:04.340813 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" event={"ID":"0fdb152c-7b26-4ed6-8bb8-6a846224c67b","Type":"ContainerDied","Data":"9b0dcdea8536fd69cc550db76c797c2b233941b5ed5fc0345fea4348ff9e28b4"} Jan 20 11:30:04 crc kubenswrapper[4725]: I0120 11:30:04.340887 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9b0dcdea8536fd69cc550db76c797c2b233941b5ed5fc0345fea4348ff9e28b4" Jan 20 11:30:04 crc kubenswrapper[4725]: I0120 11:30:04.340964 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc" Jan 20 11:30:20 crc kubenswrapper[4725]: I0120 11:30:20.510834 4725 generic.go:334] "Generic (PLEG): container finished" podID="851c53a0-c674-49b2-88dc-77da0a70406b" containerID="d02a02aca60254ad250ffe6b9525dda6f9b904e95118572ca9b292f09c32136b" exitCode=0 Jan 20 11:30:20 crc kubenswrapper[4725]: I0120 11:30:20.510939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerDied","Data":"d02a02aca60254ad250ffe6b9525dda6f9b904e95118572ca9b292f09c32136b"} Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.829017 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954545 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954647 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954710 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954741 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954800 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954857 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954881 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954906 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954940 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.954967 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume 
\"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.955029 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.955063 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") pod \"851c53a0-c674-49b2-88dc-77da0a70406b\" (UID: \"851c53a0-c674-49b2-88dc-77da0a70406b\") " Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.955619 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.955920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.956139 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.956712 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.956786 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.957736 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.960792 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.973665 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j" (OuterVolumeSpecName: "kube-api-access-88l7j") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "kube-api-access-88l7j". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.974191 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:21 crc kubenswrapper[4725]: I0120 11:30:21.974299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057738 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057831 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057845 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057865 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057878 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-88l7j\" (UniqueName: \"kubernetes.io/projected/851c53a0-c674-49b2-88dc-77da0a70406b-kube-api-access-88l7j\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057891 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/851c53a0-c674-49b2-88dc-77da0a70406b-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057903 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/851c53a0-c674-49b2-88dc-77da0a70406b-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057916 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057945 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/851c53a0-c674-49b2-88dc-77da0a70406b-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.057958 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.087514 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.159499 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.529613 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"851c53a0-c674-49b2-88dc-77da0a70406b","Type":"ContainerDied","Data":"6a9aff2c07fcb35085b065af0d4d52d91283e430a1d195a0198fb4e039bb9494"} Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.529680 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a9aff2c07fcb35085b065af0d4d52d91283e430a1d195a0198fb4e039bb9494" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.529799 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.901543 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "851c53a0-c674-49b2-88dc-77da0a70406b" (UID: "851c53a0-c674-49b2-88dc-77da0a70406b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:22 crc kubenswrapper[4725]: I0120 11:30:22.972653 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/851c53a0-c674-49b2-88dc-77da0a70406b-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.584504 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 20 11:30:32 crc kubenswrapper[4725]: E0120 11:30:32.585742 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="manage-dockerfile" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585761 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="manage-dockerfile" Jan 20 11:30:32 crc kubenswrapper[4725]: E0120 11:30:32.585791 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" containerName="collect-profiles" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585797 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" containerName="collect-profiles" Jan 20 11:30:32 crc kubenswrapper[4725]: E0120 11:30:32.585815 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="docker-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585824 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="docker-build" Jan 20 11:30:32 crc kubenswrapper[4725]: E0120 11:30:32.585831 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="git-clone" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585837 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" 
containerName="git-clone" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585949 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" containerName="collect-profiles" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.585968 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="851c53a0-c674-49b2-88dc-77da0a70406b" containerName="docker-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.586760 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.590952 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-ca" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.591957 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.593157 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-sys-config" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.597040 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-1-global-ca" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.606137 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659278 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659382 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659506 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659578 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659701 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659772 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659793 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659826 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659903 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659954 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.659986 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.760969 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.761578 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.761750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.761941 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762232 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762440 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762775 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762698 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.762849 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.761698 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.763379 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.763600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.763734 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764046 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764048 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764150 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") pod 
\"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764726 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764738 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.764720 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.770880 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.770908 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.784724 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:32 crc kubenswrapper[4725]: I0120 11:30:32.909063 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 20 11:30:33 crc kubenswrapper[4725]: I0120 11:30:33.138106 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"]
Jan 20 11:30:33 crc kubenswrapper[4725]: I0120 11:30:33.625576 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerStarted","Data":"322aa27a42fe64732b61397f3af12e6913daf6723474abf3c9c0bde2daa65c96"}
Jan 20 11:30:33 crc kubenswrapper[4725]: I0120 11:30:33.626164 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerStarted","Data":"3b9caee28289884d1a8f320326ecf12177d8c3af9c0ce2a05fbdbe77cf7afbd5"}
Jan 20 11:30:34 crc kubenswrapper[4725]: I0120 11:30:34.636942 4725 generic.go:334] "Generic (PLEG): container finished" podID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerID="322aa27a42fe64732b61397f3af12e6913daf6723474abf3c9c0bde2daa65c96" exitCode=0
Jan 20 11:30:34 crc kubenswrapper[4725]: I0120 11:30:34.637020 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerDied","Data":"322aa27a42fe64732b61397f3af12e6913daf6723474abf3c9c0bde2daa65c96"}
Jan 20 11:30:35 crc kubenswrapper[4725]: I0120 11:30:35.656391 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_f34838ec-7be3-417b-9394-8b6ebffb8dd9/docker-build/0.log"
Jan 20 11:30:35 crc kubenswrapper[4725]: I0120 11:30:35.657488 4725 generic.go:334] "Generic (PLEG): container finished" podID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerID="78f02562103ddffde1093928ec6242b4c8b49a6f4ce128c626fad826fff2e675" exitCode=1
Jan 20 11:30:35 crc kubenswrapper[4725]: I0120 11:30:35.657560 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerDied","Data":"78f02562103ddffde1093928ec6242b4c8b49a6f4ce128c626fad826fff2e675"}
Jan 20 11:30:36 crc kubenswrapper[4725]: I0120 11:30:36.942260 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_f34838ec-7be3-417b-9394-8b6ebffb8dd9/docker-build/0.log"
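
Unlike the earlier build steps, the docker-build container above finished with exitCode=1, so the service-telemetry-operator-bundle-1-build pod failed and its volumes are torn down next. A hedged sketch (names are mine) for flagging such failures from the "Generic (PLEG): container finished" entries:

import re

FINISHED = re.compile(r'container finished" podID="([^"]+)" containerID="([^"]+)" exitCode=(\d+)')

def failed_containers(lines):
    """Yield (podID, containerID, exitCode) for containers that exited non-zero."""
    for line in lines:
        for pod_id, container_id, code in FINISHED.findall(line):
            if int(code) != 0:
                yield pod_id, container_id, int(code)

# For the entries above this yields:
# ("f34838ec-7be3-417b-9394-8b6ebffb8dd9",
#  "78f02562103ddffde1093928ec6242b4c8b49a6f4ce128c626fad826fff2e675", 1)
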
Jan 20 11:30:36 crc kubenswrapper[4725]: I0120 11:30:36.944356 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build"
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033465 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") "
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033541 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") "
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033600 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") "
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033662 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") "
Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "buildcachedir".
PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033703 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033880 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.033949 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.034019 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.034430 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.034798 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.034954 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.035053 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.035982 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.039709 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.044384 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.044452 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135167 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135273 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135327 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135354 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") pod \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\" (UID: \"f34838ec-7be3-417b-9394-8b6ebffb8dd9\") " Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135590 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135607 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc 
kubenswrapper[4725]: I0120 11:30:37.135619 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135632 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/f34838ec-7be3-417b-9394-8b6ebffb8dd9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135642 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/f34838ec-7be3-417b-9394-8b6ebffb8dd9-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135653 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135698 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.135985 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.136487 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.136741 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.139508 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx" (OuterVolumeSpecName: "kube-api-access-4dpcx") pod "f34838ec-7be3-417b-9394-8b6ebffb8dd9" (UID: "f34838ec-7be3-417b-9394-8b6ebffb8dd9"). InnerVolumeSpecName "kube-api-access-4dpcx". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.237469 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.237517 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-4dpcx\" (UniqueName: \"kubernetes.io/projected/f34838ec-7be3-417b-9394-8b6ebffb8dd9-kube-api-access-4dpcx\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.237533 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.237544 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/f34838ec-7be3-417b-9394-8b6ebffb8dd9-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.674488 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_f34838ec-7be3-417b-9394-8b6ebffb8dd9/docker-build/0.log" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.675246 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"f34838ec-7be3-417b-9394-8b6ebffb8dd9","Type":"ContainerDied","Data":"3b9caee28289884d1a8f320326ecf12177d8c3af9c0ce2a05fbdbe77cf7afbd5"} Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.675294 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3b9caee28289884d1a8f320326ecf12177d8c3af9c0ce2a05fbdbe77cf7afbd5" Jan 20 11:30:37 crc kubenswrapper[4725]: I0120 11:30:37.675379 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 20 11:30:43 crc kubenswrapper[4725]: I0120 11:30:43.385856 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 20 11:30:43 crc kubenswrapper[4725]: I0120 11:30:43.391528 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 20 11:30:44 crc kubenswrapper[4725]: I0120 11:30:44.954273 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" path="/var/lib/kubelet/pods/f34838ec-7be3-417b-9394-8b6ebffb8dd9/volumes" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.014841 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 20 11:30:45 crc kubenswrapper[4725]: E0120 11:30:45.015197 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="manage-dockerfile" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.015218 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="manage-dockerfile" Jan 20 11:30:45 crc kubenswrapper[4725]: E0120 11:30:45.015234 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="docker-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.015242 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="docker-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.015380 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f34838ec-7be3-417b-9394-8b6ebffb8dd9" containerName="docker-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.017437 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.024604 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-sys-config" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.024805 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-global-ca" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.024941 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-operator-bundle-2-ca" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.025606 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.040961 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060106 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060326 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060360 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060377 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060490 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060809 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060851 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060896 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.060988 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.061018 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.061101 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.061143 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163273 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163335 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163381 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163416 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163447 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163468 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163495 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163512 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163551 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163573 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163600 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.163714 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.164386 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.164677 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.164730 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.164798 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.165100 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: 
\"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.165151 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.165484 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.166266 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.170850 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.171528 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.192836 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.360459 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.593462 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 20 11:30:45 crc kubenswrapper[4725]: I0120 11:30:45.741509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerStarted","Data":"a3ed9de274f153291cfae19b23cc93d0467c036d25f39f703af1ea1d97e74a14"} Jan 20 11:30:46 crc kubenswrapper[4725]: I0120 11:30:46.754893 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerStarted","Data":"a228fc05dcf1d258035d853f2b9fb5a0b0fe393defb0dd4411a77e8b1fb737dd"} Jan 20 11:30:47 crc kubenswrapper[4725]: I0120 11:30:47.764451 4725 generic.go:334] "Generic (PLEG): container finished" podID="814e040b-c073-451b-80c4-2e90cb554a6b" containerID="a228fc05dcf1d258035d853f2b9fb5a0b0fe393defb0dd4411a77e8b1fb737dd" exitCode=0 Jan 20 11:30:47 crc kubenswrapper[4725]: I0120 11:30:47.764519 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"a228fc05dcf1d258035d853f2b9fb5a0b0fe393defb0dd4411a77e8b1fb737dd"} Jan 20 11:30:48 crc kubenswrapper[4725]: I0120 11:30:48.775450 4725 generic.go:334] "Generic (PLEG): container finished" podID="814e040b-c073-451b-80c4-2e90cb554a6b" containerID="43a4493455d38e0ab93389748f33cc58cabcde6d5c7b7b59319e8b0f3d4f3e9b" exitCode=0 Jan 20 11:30:48 crc kubenswrapper[4725]: I0120 11:30:48.775552 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"43a4493455d38e0ab93389748f33cc58cabcde6d5c7b7b59319e8b0f3d4f3e9b"} Jan 20 11:30:48 crc kubenswrapper[4725]: I0120 11:30:48.836949 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_814e040b-c073-451b-80c4-2e90cb554a6b/manage-dockerfile/0.log" Jan 20 11:30:49 crc kubenswrapper[4725]: I0120 11:30:49.789644 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerStarted","Data":"bbb59cdd24eccaccbdc033a2eaf566480990fb577b5ba529dc5d97b6a7bb547f"} Jan 20 11:30:49 crc kubenswrapper[4725]: I0120 11:30:49.823807 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=5.823776377 podStartE2EDuration="5.823776377s" podCreationTimestamp="2026-01-20 11:30:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:30:49.819731079 +0000 UTC m=+1578.028053082" watchObservedRunningTime="2026-01-20 11:30:49.823776377 +0000 UTC m=+1578.032098350" Jan 20 11:30:51 crc kubenswrapper[4725]: I0120 11:30:51.809238 4725 generic.go:334] "Generic (PLEG): container finished" podID="814e040b-c073-451b-80c4-2e90cb554a6b" containerID="bbb59cdd24eccaccbdc033a2eaf566480990fb577b5ba529dc5d97b6a7bb547f" 
Jan 20 11:30:51 crc kubenswrapper[4725]: I0120 11:30:51.809343 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"bbb59cdd24eccaccbdc033a2eaf566480990fb577b5ba529dc5d97b6a7bb547f"}
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.092022 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build"
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292267 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292363 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292418 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292466 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292515 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292556 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292597 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292628 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292677 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292734 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292787 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.292809 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") pod \"814e040b-c073-451b-80c4-2e90cb554a6b\" (UID: \"814e040b-c073-451b-80c4-2e90cb554a6b\") "
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.293111 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.294178 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.294351 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.294314 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.295290 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.295381 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.295746 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.295926 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.299856 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk" (OuterVolumeSpecName: "kube-api-access-hxhhk") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "kube-api-access-hxhhk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.300120 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.300283 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.301181 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "814e040b-c073-451b-80c4-2e90cb554a6b" (UID: "814e040b-c073-451b-80c4-2e90cb554a6b"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394834 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394889 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394906 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394924 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394935 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/814e040b-c073-451b-80c4-2e90cb554a6b-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394944 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hxhhk\" (UniqueName: \"kubernetes.io/projected/814e040b-c073-451b-80c4-2e90cb554a6b-kube-api-access-hxhhk\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394953 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394964 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394976 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394989 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/814e040b-c073-451b-80c4-2e90cb554a6b-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.394999 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/814e040b-c073-451b-80c4-2e90cb554a6b-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.395011 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/814e040b-c073-451b-80c4-2e90cb554a6b-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.829268 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"a3ed9de274f153291cfae19b23cc93d0467c036d25f39f703af1ea1d97e74a14"}
pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"814e040b-c073-451b-80c4-2e90cb554a6b","Type":"ContainerDied","Data":"a3ed9de274f153291cfae19b23cc93d0467c036d25f39f703af1ea1d97e74a14"} Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.829351 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 20 11:30:53 crc kubenswrapper[4725]: I0120 11:30:53.829355 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a3ed9de274f153291cfae19b23cc93d0467c036d25f39f703af1ea1d97e74a14" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.576034 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:30:57 crc kubenswrapper[4725]: E0120 11:30:57.577211 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="docker-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.577229 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="docker-build" Jan 20 11:30:57 crc kubenswrapper[4725]: E0120 11:30:57.577247 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="manage-dockerfile" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.577254 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="manage-dockerfile" Jan 20 11:30:57 crc kubenswrapper[4725]: E0120 11:30:57.577267 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="git-clone" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.577278 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="git-clone" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.577399 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="814e040b-c073-451b-80c4-2e90cb554a6b" containerName="docker-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.578262 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.581051 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-global-ca" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.581760 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.581960 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-ca" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.592674 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.599121 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-1-sys-config" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762193 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762277 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762326 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762349 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762496 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762575 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762742 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.762862 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.763008 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.763172 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.763223 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.763258 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864625 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") pod 
\"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864725 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864753 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864777 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864807 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864847 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864875 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864899 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864924 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864944 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.864992 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.865611 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.865812 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.866574 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.866699 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.866683 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.867150 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.868044 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.868651 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.871513 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.874505 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.875712 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.890296 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:57 crc kubenswrapper[4725]: I0120 11:30:57.944145 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:30:58 crc kubenswrapper[4725]: I0120 11:30:58.213501 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:30:58 crc kubenswrapper[4725]: I0120 11:30:58.876990 4725 generic.go:334] "Generic (PLEG): container finished" podID="3730545e-db48-47ff-bbaf-1374485e0a68" containerID="cbb40b4a35af16ef739d7936989eb2a98cbe2e9f78178e91db6ddf8b1dfef24b" exitCode=0 Jan 20 11:30:58 crc kubenswrapper[4725]: I0120 11:30:58.877056 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"3730545e-db48-47ff-bbaf-1374485e0a68","Type":"ContainerDied","Data":"cbb40b4a35af16ef739d7936989eb2a98cbe2e9f78178e91db6ddf8b1dfef24b"} Jan 20 11:30:58 crc kubenswrapper[4725]: I0120 11:30:58.877515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"3730545e-db48-47ff-bbaf-1374485e0a68","Type":"ContainerStarted","Data":"13c61c9b9dda1fd983408b19827c5cc397f84e67628de40558d79753ad990a7f"} Jan 20 11:30:59 crc kubenswrapper[4725]: I0120 11:30:59.887741 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_3730545e-db48-47ff-bbaf-1374485e0a68/docker-build/0.log" Jan 20 11:30:59 crc kubenswrapper[4725]: I0120 11:30:59.888716 4725 generic.go:334] "Generic (PLEG): container finished" podID="3730545e-db48-47ff-bbaf-1374485e0a68" containerID="697a37843b8a0440d43c4e8976463aac27a527f1025878803dd957ce26ac737d" exitCode=1 Jan 20 11:30:59 crc kubenswrapper[4725]: I0120 11:30:59.888762 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"3730545e-db48-47ff-bbaf-1374485e0a68","Type":"ContainerDied","Data":"697a37843b8a0440d43c4e8976463aac27a527f1025878803dd957ce26ac737d"} Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.171514 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_3730545e-db48-47ff-bbaf-1374485e0a68/docker-build/0.log" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.172321 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326692 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326810 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326865 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326903 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.326963 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327059 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327113 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327137 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327202 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327234 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327295 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327345 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") pod \"3730545e-db48-47ff-bbaf-1374485e0a68\" (UID: \"3730545e-db48-47ff-bbaf-1374485e0a68\") " Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327818 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.327869 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.328789 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.328807 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.329241 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.329404 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.329683 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.330828 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.331532 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.336504 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.336536 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.336705 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f" (OuterVolumeSpecName: "kube-api-access-7ws2f") pod "3730545e-db48-47ff-bbaf-1374485e0a68" (UID: "3730545e-db48-47ff-bbaf-1374485e0a68"). InnerVolumeSpecName "kube-api-access-7ws2f". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429369 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429435 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429451 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429462 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429474 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429488 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3730545e-db48-47ff-bbaf-1374485e0a68-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429499 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429509 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429521 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7ws2f\" (UniqueName: \"kubernetes.io/projected/3730545e-db48-47ff-bbaf-1374485e0a68-kube-api-access-7ws2f\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429531 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3730545e-db48-47ff-bbaf-1374485e0a68-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429539 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/3730545e-db48-47ff-bbaf-1374485e0a68-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.429551 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3730545e-db48-47ff-bbaf-1374485e0a68-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.906521 4725 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_3730545e-db48-47ff-bbaf-1374485e0a68/docker-build/0.log" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.907019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"3730545e-db48-47ff-bbaf-1374485e0a68","Type":"ContainerDied","Data":"13c61c9b9dda1fd983408b19827c5cc397f84e67628de40558d79753ad990a7f"} Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.907066 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13c61c9b9dda1fd983408b19827c5cc397f84e67628de40558d79753ad990a7f" Jan 20 11:31:01 crc kubenswrapper[4725]: I0120 11:31:01.907118 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 20 11:31:08 crc kubenswrapper[4725]: I0120 11:31:08.529451 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:31:08 crc kubenswrapper[4725]: I0120 11:31:08.535291 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 20 11:31:08 crc kubenswrapper[4725]: I0120 11:31:08.944668 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" path="/var/lib/kubelet/pods/3730545e-db48-47ff-bbaf-1374485e0a68/volumes" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.155891 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 20 11:31:10 crc kubenswrapper[4725]: E0120 11:31:10.156860 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="manage-dockerfile" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.156884 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="manage-dockerfile" Jan 20 11:31:10 crc kubenswrapper[4725]: E0120 11:31:10.156900 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="docker-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.156908 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="docker-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.157043 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3730545e-db48-47ff-bbaf-1374485e0a68" containerName="docker-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.158301 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.161758 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-sys-config" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.162105 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-ca" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.162127 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.163137 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"smart-gateway-operator-bundle-2-global-ca" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.183149 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222329 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222397 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222470 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222523 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222612 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222687 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222765 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222860 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.222913 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.223050 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.223124 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.223176 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324668 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: 
\"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324696 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324744 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324770 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324802 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324830 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324850 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324887 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324875 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " 
pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.324919 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.325148 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.325192 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.325951 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.326265 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.326397 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.327317 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.327627 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.327654 4725 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.327851 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.328131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.333572 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.333922 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.348902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.529319 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.816747 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 20 11:31:10 crc kubenswrapper[4725]: I0120 11:31:10.986842 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerStarted","Data":"1a2a08d778f3b582e358b59a79dc7afb885edaabd7deb1fae92e438cfc39d404"} Jan 20 11:31:11 crc kubenswrapper[4725]: I0120 11:31:11.998063 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerStarted","Data":"095bb767bc7664f78d71c0ee7ec40ec2255564b01b456613aa71fd3e4aaa3bba"} Jan 20 11:31:13 crc kubenswrapper[4725]: I0120 11:31:13.006841 4725 generic.go:334] "Generic (PLEG): container finished" podID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerID="095bb767bc7664f78d71c0ee7ec40ec2255564b01b456613aa71fd3e4aaa3bba" exitCode=0 Jan 20 11:31:13 crc kubenswrapper[4725]: I0120 11:31:13.007415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerDied","Data":"095bb767bc7664f78d71c0ee7ec40ec2255564b01b456613aa71fd3e4aaa3bba"} Jan 20 11:31:14 crc kubenswrapper[4725]: I0120 11:31:14.021478 4725 generic.go:334] "Generic (PLEG): container finished" podID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerID="3bde4ee52f0cffd609acce63c5f94debf2d5ab7ddc4ca8c67dfcc4b64f7f72be" exitCode=0 Jan 20 11:31:14 crc kubenswrapper[4725]: I0120 11:31:14.021602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerDied","Data":"3bde4ee52f0cffd609acce63c5f94debf2d5ab7ddc4ca8c67dfcc4b64f7f72be"} Jan 20 11:31:14 crc kubenswrapper[4725]: I0120 11:31:14.058610 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b276c041-188f-4dd1-a7b4-0d0ba6531174/manage-dockerfile/0.log" Jan 20 11:31:15 crc kubenswrapper[4725]: I0120 11:31:15.033678 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerStarted","Data":"4315023da56f5a041d9648d5368227b026e76e7ede2ede61b477a2c92be02303"} Jan 20 11:31:15 crc kubenswrapper[4725]: I0120 11:31:15.067389 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=5.067359032 podStartE2EDuration="5.067359032s" podCreationTimestamp="2026-01-20 11:31:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:31:15.060302559 +0000 UTC m=+1603.268624542" watchObservedRunningTime="2026-01-20 11:31:15.067359032 +0000 UTC m=+1603.275681005" Jan 20 11:31:19 crc kubenswrapper[4725]: I0120 11:31:19.066811 4725 generic.go:334] "Generic (PLEG): container finished" podID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerID="4315023da56f5a041d9648d5368227b026e76e7ede2ede61b477a2c92be02303" exitCode=0 Jan 20 11:31:19 crc 
kubenswrapper[4725]: I0120 11:31:19.066890 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerDied","Data":"4315023da56f5a041d9648d5368227b026e76e7ede2ede61b477a2c92be02303"} Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.383116 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520355 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520446 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520492 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520526 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520548 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520590 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520630 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520647 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") " Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520623 4725 operation_generator.go:803] 
UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520666 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") "
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520843 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") "
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520876 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") "
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.520897 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") pod \"b276c041-188f-4dd1-a7b4-0d0ba6531174\" (UID: \"b276c041-188f-4dd1-a7b4-0d0ba6531174\") "
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521507 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521550 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521597 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521666 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.521768 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.522217 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.522372 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.522549 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.531271 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.537423 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh" (OuterVolumeSpecName: "kube-api-access-v4snh") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "kube-api-access-v4snh". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.540248 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.551948 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b276c041-188f-4dd1-a7b4-0d0ba6531174" (UID: "b276c041-188f-4dd1-a7b4-0d0ba6531174"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.622907 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.622965 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-container-storage-root\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.622990 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623005 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623019 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623036 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/b276c041-188f-4dd1-a7b4-0d0ba6531174-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623050 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623060 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-v4snh\" (UniqueName: \"kubernetes.io/projected/b276c041-188f-4dd1-a7b4-0d0ba6531174-kube-api-access-v4snh\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623069 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623108 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b276c041-188f-4dd1-a7b4-0d0ba6531174-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:20 crc kubenswrapper[4725]: I0120 11:31:20.623125 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b276c041-188f-4dd1-a7b4-0d0ba6531174-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 20 11:31:21 crc kubenswrapper[4725]: I0120 11:31:21.088019 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"b276c041-188f-4dd1-a7b4-0d0ba6531174","Type":"ContainerDied","Data":"1a2a08d778f3b582e358b59a79dc7afb885edaabd7deb1fae92e438cfc39d404"}
Jan 20 11:31:21 crc kubenswrapper[4725]: I0120 11:31:21.088090 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1a2a08d778f3b582e358b59a79dc7afb885edaabd7deb1fae92e438cfc39d404"
Jan 20 11:31:21 crc kubenswrapper[4725]: I0120 11:31:21.088130 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build"
Jan 20 11:31:26 crc kubenswrapper[4725]: I0120 11:31:26.728638 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 11:31:26 crc kubenswrapper[4725]: I0120 11:31:26.729599 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.765145 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Jan 20 11:31:38 crc kubenswrapper[4725]: E0120 11:31:38.766054 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="manage-dockerfile"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.766069 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="manage-dockerfile"
Jan 20 11:31:38 crc kubenswrapper[4725]: E0120 11:31:38.766113 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="git-clone"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.766119 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="git-clone"
Jan 20 11:31:38 crc kubenswrapper[4725]: E0120 11:31:38.766128 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="docker-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.766136 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="docker-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.766256 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b276c041-188f-4dd1-a7b4-0d0ba6531174" containerName="docker-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.767238 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.770242 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-global-ca"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.770272 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-framework-index-dockercfg"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.770367 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-ca"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.770425 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"builder-dockercfg-ns4k2"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.771144 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"service-telemetry-framework-index-1-sys-config"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.795819 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.909022 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910116 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910249 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910386 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910538 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910656 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910711 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910799 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.910964 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.911028 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.911053 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:38 crc kubenswrapper[4725]: I0120 11:31:38.911191 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012278 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012368 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012396 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012459 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012497 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012531 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012567 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012621 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012654 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.012727 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013282 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013314 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013577 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013641 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013882 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.013941 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.014235 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.014479 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.014531 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.020305 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.026635 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.026850 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.033715 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") pod \"service-telemetry-framework-index-1-build\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") " pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.116779 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.377414 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"]
Jan 20 11:31:39 crc kubenswrapper[4725]: I0120 11:31:39.402119 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerStarted","Data":"83f0df430461004a77dbc3f3c45e3d15c682f81a8ac4872c355830d1bd8280b0"}
Jan 20 11:31:40 crc kubenswrapper[4725]: I0120 11:31:40.412276 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerStarted","Data":"980011b6035083b7c5ff3e0b221d1f3e58b3e76fa827f1157d70d0d0c290c65a"}
Jan 20 11:31:41 crc kubenswrapper[4725]: I0120 11:31:41.424691 4725 generic.go:334] "Generic (PLEG): container finished" podID="184194a7-f32c-4db2-a055-5a776484cda8" containerID="980011b6035083b7c5ff3e0b221d1f3e58b3e76fa827f1157d70d0d0c290c65a" exitCode=0
Jan 20 11:31:41 crc kubenswrapper[4725]: I0120 11:31:41.424787 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerDied","Data":"980011b6035083b7c5ff3e0b221d1f3e58b3e76fa827f1157d70d0d0c290c65a"}
Jan 20 11:31:42 crc kubenswrapper[4725]: I0120 11:31:42.436031 4725 generic.go:334] "Generic (PLEG): container finished" podID="184194a7-f32c-4db2-a055-5a776484cda8" containerID="546ec06121171d7d920d2290c0da83826529c5af64051bec728234cb8055fc0d" exitCode=0
Jan 20 11:31:42 crc kubenswrapper[4725]: I0120 11:31:42.436251 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerDied","Data":"546ec06121171d7d920d2290c0da83826529c5af64051bec728234cb8055fc0d"}
Jan 20 11:31:42 crc kubenswrapper[4725]: I0120 11:31:42.489664 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_184194a7-f32c-4db2-a055-5a776484cda8/manage-dockerfile/0.log"
Jan 20 11:31:43 crc kubenswrapper[4725]: I0120 11:31:43.448580 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerStarted","Data":"4f404a7a9bd4e81eb4ce25e9968cd444dc303fa9ed15549c4f192754e01659a9"}
"SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerStarted","Data":"4f404a7a9bd4e81eb4ce25e9968cd444dc303fa9ed15549c4f192754e01659a9"} Jan 20 11:31:43 crc kubenswrapper[4725]: I0120 11:31:43.481629 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=5.481598534 podStartE2EDuration="5.481598534s" podCreationTimestamp="2026-01-20 11:31:38 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:31:43.478669972 +0000 UTC m=+1631.686991955" watchObservedRunningTime="2026-01-20 11:31:43.481598534 +0000 UTC m=+1631.689920507" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.156338 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.160890 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.172233 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.376940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.377389 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.377442 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.478945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.479018 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.479052 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume 
\"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.479815 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.480167 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.532040 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") pod \"community-operators-zw8vk\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") " pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:48 crc kubenswrapper[4725]: I0120 11:31:48.793725 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:31:49 crc kubenswrapper[4725]: I0120 11:31:49.326089 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:31:49 crc kubenswrapper[4725]: I0120 11:31:49.502594 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerStarted","Data":"70646bf0be00d4827288d1767e686373f5be91d20ff1f158f20cf715c5460fba"} Jan 20 11:31:51 crc kubenswrapper[4725]: I0120 11:31:51.519017 4725 generic.go:334] "Generic (PLEG): container finished" podID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerID="362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906" exitCode=0 Jan 20 11:31:51 crc kubenswrapper[4725]: I0120 11:31:51.519577 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerDied","Data":"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906"} Jan 20 11:31:53 crc kubenswrapper[4725]: I0120 11:31:53.539961 4725 generic.go:334] "Generic (PLEG): container finished" podID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerID="9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf" exitCode=0 Jan 20 11:31:53 crc kubenswrapper[4725]: I0120 11:31:53.540113 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerDied","Data":"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf"} Jan 20 11:31:56 crc kubenswrapper[4725]: I0120 11:31:56.727852 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:31:56 crc kubenswrapper[4725]: I0120 11:31:56.728491 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:32:01 crc kubenswrapper[4725]: I0120 11:32:01.619238 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerStarted","Data":"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"} Jan 20 11:32:01 crc kubenswrapper[4725]: I0120 11:32:01.648666 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-zw8vk" podStartSLOduration=5.135286147 podStartE2EDuration="13.648641234s" podCreationTimestamp="2026-01-20 11:31:48 +0000 UTC" firstStartedPulling="2026-01-20 11:31:51.521379729 +0000 UTC m=+1639.729701712" lastFinishedPulling="2026-01-20 11:32:00.034734826 +0000 UTC m=+1648.243056799" observedRunningTime="2026-01-20 11:32:01.644659619 +0000 UTC m=+1649.852981592" watchObservedRunningTime="2026-01-20 11:32:01.648641234 +0000 UTC m=+1649.856963207" Jan 20 11:32:08 crc kubenswrapper[4725]: I0120 11:32:08.794895 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:08 crc kubenswrapper[4725]: I0120 11:32:08.795963 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:08 crc kubenswrapper[4725]: I0120 11:32:08.846342 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:09 crc kubenswrapper[4725]: I0120 11:32:09.731567 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-zw8vk" Jan 20 11:32:09 crc kubenswrapper[4725]: I0120 11:32:09.790853 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"] Jan 20 11:32:11 crc kubenswrapper[4725]: I0120 11:32:11.704023 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-zw8vk" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="registry-server" containerID="cri-o://2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747" gracePeriod=2 Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.370888 4725 util.go:48] "No ready sandbox for pod can be found. 
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.516824 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") pod \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") "
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.516912 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") pod \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") "
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.517064 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") pod \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\" (UID: \"b19adb35-c4b0-4602-bb43-78f6e8b51b70\") "
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.518216 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities" (OuterVolumeSpecName: "utilities") pod "b19adb35-c4b0-4602-bb43-78f6e8b51b70" (UID: "b19adb35-c4b0-4602-bb43-78f6e8b51b70"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.524907 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s" (OuterVolumeSpecName: "kube-api-access-8fv7s") pod "b19adb35-c4b0-4602-bb43-78f6e8b51b70" (UID: "b19adb35-c4b0-4602-bb43-78f6e8b51b70"). InnerVolumeSpecName "kube-api-access-8fv7s". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.584051 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b19adb35-c4b0-4602-bb43-78f6e8b51b70" (UID: "b19adb35-c4b0-4602-bb43-78f6e8b51b70"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.619171 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.619218 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8fv7s\" (UniqueName: \"kubernetes.io/projected/b19adb35-c4b0-4602-bb43-78f6e8b51b70-kube-api-access-8fv7s\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.619230 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b19adb35-c4b0-4602-bb43-78f6e8b51b70-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716609 4725 generic.go:334] "Generic (PLEG): container finished" podID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerID="2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747" exitCode=0
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716731 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-zw8vk"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716718 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerDied","Data":"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"}
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716883 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-zw8vk" event={"ID":"b19adb35-c4b0-4602-bb43-78f6e8b51b70","Type":"ContainerDied","Data":"70646bf0be00d4827288d1767e686373f5be91d20ff1f158f20cf715c5460fba"}
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.716914 4725 scope.go:117] "RemoveContainer" containerID="2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.758285 4725 scope.go:117] "RemoveContainer" containerID="9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.765099 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"]
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.791749 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-zw8vk"]
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.801201 4725 scope.go:117] "RemoveContainer" containerID="362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.824925 4725 scope.go:117] "RemoveContainer" containerID="2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"
Jan 20 11:32:12 crc kubenswrapper[4725]: E0120 11:32:12.825594 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747\": container with ID starting with 2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747 not found: ID does not exist" containerID="2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.825663 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747"} err="failed to get container status \"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747\": rpc error: code = NotFound desc = could not find container \"2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747\": container with ID starting with 2f8f5a396c879473800e0a26812018c0651ce11b282271ee9bad74bb14b6f747 not found: ID does not exist"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.825690 4725 scope.go:117] "RemoveContainer" containerID="9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf"
Jan 20 11:32:12 crc kubenswrapper[4725]: E0120 11:32:12.826146 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf\": container with ID starting with 9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf not found: ID does not exist" containerID="9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.826181 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf"} err="failed to get container status \"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf\": rpc error: code = NotFound desc = could not find container \"9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf\": container with ID starting with 9b56e90a3cd390f6d7c4ad431157ac0200a91ccbea36a6edd7721d4139974dcf not found: ID does not exist"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.826200 4725 scope.go:117] "RemoveContainer" containerID="362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906"
Jan 20 11:32:12 crc kubenswrapper[4725]: E0120 11:32:12.826760 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906\": container with ID starting with 362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906 not found: ID does not exist" containerID="362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.826844 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906"} err="failed to get container status \"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906\": rpc error: code = NotFound desc = could not find container \"362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906\": container with ID starting with 362be9e3f35e2fb137b055aee99e399aa2b27e5f117a9c7943cdb1d994f0e906 not found: ID does not exist"
Jan 20 11:32:12 crc kubenswrapper[4725]: I0120 11:32:12.944606 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" path="/var/lib/kubelet/pods/b19adb35-c4b0-4602-bb43-78f6e8b51b70/volumes"
Jan 20 11:32:16 crc kubenswrapper[4725]: I0120 11:32:16.755315 4725 generic.go:334] "Generic (PLEG): container finished" podID="184194a7-f32c-4db2-a055-5a776484cda8" containerID="4f404a7a9bd4e81eb4ce25e9968cd444dc303fa9ed15549c4f192754e01659a9" exitCode=0
Jan 20 11:32:16 crc kubenswrapper[4725]: I0120 11:32:16.755377 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerDied","Data":"4f404a7a9bd4e81eb4ce25e9968cd444dc303fa9ed15549c4f192754e01659a9"}
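
The three RemoveContainer/NotFound pairs above are a benign race rather than a real failure: the second round of "RemoveContainer" requests arrives after CRI-O has already pruned those containers, so ContainerStatus returns rpc `code = NotFound` and DeleteContainer reports it. When scanning a journal for genuine deletion failures it helps to drop exactly that pattern; a hypothetical filter (not kubelet code):

    import sys

    # Keep only DeleteContainer errors that are NOT the benign
    # "already gone" race (rpc code = NotFound) seen above.
    for line in sys.stdin:
        if "DeleteContainer returned error" in line and "code = NotFound" not in line:
            sys.stdout.write(line)
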
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.101596 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211423 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211526 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211569 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211639 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211693 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211715 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211708 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211740 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211776 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211751 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211823 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211856 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211878 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211938 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.211963 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") pod \"184194a7-f32c-4db2-a055-5a776484cda8\" (UID: \"184194a7-f32c-4db2-a055-5a776484cda8\") "
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.212311 4725 reconciler_common.go:293] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-buildcachedir\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.212324 4725 reconciler_common.go:293] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/184194a7-f32c-4db2-a055-5a776484cda8-node-pullsecrets\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.213113 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.213267 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.213627 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.213655 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.215230 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.221587 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-push") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "builder-dockercfg-ns4k2-push". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.221682 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull" (OuterVolumeSpecName: "builder-dockercfg-ns4k2-pull") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "builder-dockercfg-ns4k2-pull". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.221929 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk" (OuterVolumeSpecName: "kube-api-access-fdbxk") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "kube-api-access-fdbxk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.225307 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313593 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-pull\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-pull\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313883 4725 reconciler_common.go:293] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-system-configs\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313893 4725 reconciler_common.go:293] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313906 4725 reconciler_common.go:293] "Volume detached for volume \"builder-dockercfg-ns4k2-push\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-builder-dockercfg-ns4k2-push\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313916 4725 reconciler_common.go:293] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/184194a7-f32c-4db2-a055-5a776484cda8-build-ca-bundles\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313928 4725 reconciler_common.go:293] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-buildworkdir\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313938 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-fdbxk\" (UniqueName: \"kubernetes.io/projected/184194a7-f32c-4db2-a055-5a776484cda8-kube-api-access-fdbxk\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313946 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-run\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.313955 4725 reconciler_common.go:293] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/184194a7-f32c-4db2-a055-5a776484cda8-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.494405 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.516213 4725 reconciler_common.go:293] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-build-blob-cache\") on node \"crc\" DevicePath \"\""
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.777208 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"184194a7-f32c-4db2-a055-5a776484cda8","Type":"ContainerDied","Data":"83f0df430461004a77dbc3f3c45e3d15c682f81a8ac4872c355830d1bd8280b0"}
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.777277 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="83f0df430461004a77dbc3f3c45e3d15c682f81a8ac4872c355830d1bd8280b0"
Jan 20 11:32:18 crc kubenswrapper[4725]: I0120 11:32:18.777478 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build"
Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.549467 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"]
Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550400 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="git-clone"
Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550419 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="git-clone"
Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550436 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="extract-utilities"
Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550443 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="extract-utilities"
Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550454 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="docker-build"
Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550461 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="docker-build"
Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550473 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="registry-server"
Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550482 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="registry-server"
Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550492 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="184194a7-f32c-4db2-a055-5a776484cda8"
containerName="manage-dockerfile" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550499 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="manage-dockerfile" Jan 20 11:32:20 crc kubenswrapper[4725]: E0120 11:32:20.550508 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="extract-content" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550517 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="extract-content" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550688 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b19adb35-c4b0-4602-bb43-78f6e8b51b70" containerName="registry-server" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.550701 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="184194a7-f32c-4db2-a055-5a776484cda8" containerName="docker-build" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.551328 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.554303 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"infrawatch-operators-dockercfg-6qtgx" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.567906 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.651564 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") pod \"infrawatch-operators-tppzp\" (UID: \"d34ba0e4-6450-40c0-b870-fa39d91f4340\") " pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.752940 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") pod \"infrawatch-operators-tppzp\" (UID: \"d34ba0e4-6450-40c0-b870-fa39d91f4340\") " pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.776387 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") pod \"infrawatch-operators-tppzp\" (UID: \"d34ba0e4-6450-40c0-b870-fa39d91f4340\") " pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:20 crc kubenswrapper[4725]: I0120 11:32:20.907356 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:21 crc kubenswrapper[4725]: I0120 11:32:21.190137 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:21 crc kubenswrapper[4725]: I0120 11:32:21.267784 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "184194a7-f32c-4db2-a055-5a776484cda8" (UID: "184194a7-f32c-4db2-a055-5a776484cda8"). 
InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:32:21 crc kubenswrapper[4725]: I0120 11:32:21.364202 4725 reconciler_common.go:293] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/184194a7-f32c-4db2-a055-5a776484cda8-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:21 crc kubenswrapper[4725]: I0120 11:32:21.817332 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tppzp" event={"ID":"d34ba0e4-6450-40c0-b870-fa39d91f4340","Type":"ContainerStarted","Data":"ef1c9d46b2251485916f2411ceef68848442ee457d2afecc7f3db523f6fb286a"} Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.137614 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.350573 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-4fmg5"] Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.351728 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.363559 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-4fmg5"] Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.501596 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q7c6r\" (UniqueName: \"kubernetes.io/projected/514d6114-a2ee-4a88-9798-9a27066ed03a-kube-api-access-q7c6r\") pod \"infrawatch-operators-4fmg5\" (UID: \"514d6114-a2ee-4a88-9798-9a27066ed03a\") " pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.603232 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-q7c6r\" (UniqueName: \"kubernetes.io/projected/514d6114-a2ee-4a88-9798-9a27066ed03a-kube-api-access-q7c6r\") pod \"infrawatch-operators-4fmg5\" (UID: \"514d6114-a2ee-4a88-9798-9a27066ed03a\") " pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.638213 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-q7c6r\" (UniqueName: \"kubernetes.io/projected/514d6114-a2ee-4a88-9798-9a27066ed03a-kube-api-access-q7c6r\") pod \"infrawatch-operators-4fmg5\" (UID: \"514d6114-a2ee-4a88-9798-9a27066ed03a\") " pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.694170 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:23 crc kubenswrapper[4725]: I0120 11:32:23.991779 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-4fmg5"] Jan 20 11:32:24 crc kubenswrapper[4725]: W0120 11:32:24.007570 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod514d6114_a2ee_4a88_9798_9a27066ed03a.slice/crio-fe715624a1b335afc1911d9561ba06ff2238276490377db3e400e95acdb17479 WatchSource:0}: Error finding container fe715624a1b335afc1911d9561ba06ff2238276490377db3e400e95acdb17479: Status 404 returned error can't find the container with id fe715624a1b335afc1911d9561ba06ff2238276490377db3e400e95acdb17479 Jan 20 11:32:24 crc kubenswrapper[4725]: I0120 11:32:24.849817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4fmg5" event={"ID":"514d6114-a2ee-4a88-9798-9a27066ed03a","Type":"ContainerStarted","Data":"fe715624a1b335afc1911d9561ba06ff2238276490377db3e400e95acdb17479"} Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.727945 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.728052 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.728170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.728992 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:32:26 crc kubenswrapper[4725]: I0120 11:32:26.729065 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" gracePeriod=600 Jan 20 11:32:27 crc kubenswrapper[4725]: I0120 11:32:27.900771 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" exitCode=0 Jan 20 11:32:27 crc kubenswrapper[4725]: I0120 11:32:27.900845 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"} Jan 20 11:32:27 crc kubenswrapper[4725]: I0120 11:32:27.901263 4725 
scope.go:117] "RemoveContainer" containerID="f4fc1fcf338ebfe5af6d4787c5c22eee2304f68710ca0a31b34dc23628c963f3" Jan 20 11:32:28 crc kubenswrapper[4725]: E0120 11:32:28.108872 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:32:28 crc kubenswrapper[4725]: I0120 11:32:28.911040 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:32:28 crc kubenswrapper[4725]: E0120 11:32:28.911413 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.809855 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.811341 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-7rq7f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe 
-addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-tppzp_service-telemetry(d34ba0e4-6450-40c0-b870-fa39d91f4340): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.812881 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/infrawatch-operators-tppzp" podUID="d34ba0e4-6450-40c0-b870-fa39d91f4340" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.826662 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.826884 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:registry-server,Image:image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:grpc,HostPort:0,ContainerPort:50051,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{10 -3} {} 10m DecimalSI},memory: {{52428800 0} {} 50Mi BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-q7c6r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:5,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[MKNOD],},Privileged:nil,SELinuxOptions:nil,RunAsUser:nil,RunAsNonRoot:nil,ReadOnlyRootFilesystem:*false,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[grpc_health_probe -addr=:50051],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:10,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod infrawatch-operators-4fmg5_service-telemetry(514d6114-a2ee-4a88-9798-9a27066ed03a): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 
11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.828305 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/infrawatch-operators-4fmg5" podUID="514d6114-a2ee-4a88-9798-9a27066ed03a" Jan 20 11:32:38 crc kubenswrapper[4725]: E0120 11:32:38.992333 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"registry-server\" with ImagePullBackOff: \"Back-off pulling image \\\"image-registry.openshift-image-registry.svc:5000/service-telemetry/service-telemetry-framework-index:latest\\\"\"" pod="service-telemetry/infrawatch-operators-4fmg5" podUID="514d6114-a2ee-4a88-9798-9a27066ed03a" Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.315272 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.389135 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") pod \"d34ba0e4-6450-40c0-b870-fa39d91f4340\" (UID: \"d34ba0e4-6450-40c0-b870-fa39d91f4340\") " Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.395736 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f" (OuterVolumeSpecName: "kube-api-access-7rq7f") pod "d34ba0e4-6450-40c0-b870-fa39d91f4340" (UID: "d34ba0e4-6450-40c0-b870-fa39d91f4340"). InnerVolumeSpecName "kube-api-access-7rq7f". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.490972 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-7rq7f\" (UniqueName: \"kubernetes.io/projected/d34ba0e4-6450-40c0-b870-fa39d91f4340-kube-api-access-7rq7f\") on node \"crc\" DevicePath \"\"" Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.997959 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-tppzp" event={"ID":"d34ba0e4-6450-40c0-b870-fa39d91f4340","Type":"ContainerDied","Data":"ef1c9d46b2251485916f2411ceef68848442ee457d2afecc7f3db523f6fb286a"} Jan 20 11:32:39 crc kubenswrapper[4725]: I0120 11:32:39.997986 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-tppzp" Jan 20 11:32:40 crc kubenswrapper[4725]: I0120 11:32:40.053999 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:40 crc kubenswrapper[4725]: I0120 11:32:40.067745 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-tppzp"] Jan 20 11:32:40 crc kubenswrapper[4725]: I0120 11:32:40.943653 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d34ba0e4-6450-40c0-b870-fa39d91f4340" path="/var/lib/kubelet/pods/d34ba0e4-6450-40c0-b870-fa39d91f4340/volumes" Jan 20 11:32:42 crc kubenswrapper[4725]: I0120 11:32:42.936815 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:32:42 crc kubenswrapper[4725]: E0120 11:32:42.938172 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:32:51 crc kubenswrapper[4725]: I0120 11:32:51.093371 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-4fmg5" event={"ID":"514d6114-a2ee-4a88-9798-9a27066ed03a","Type":"ContainerStarted","Data":"e9f4f503b82d1497799639260d7a78206c2b6d7e71cc786895f674c1e78eecfc"} Jan 20 11:32:51 crc kubenswrapper[4725]: I0120 11:32:51.120675 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-4fmg5" podStartSLOduration=1.485649316 podStartE2EDuration="28.120652041s" podCreationTimestamp="2026-01-20 11:32:23 +0000 UTC" firstStartedPulling="2026-01-20 11:32:24.013431428 +0000 UTC m=+1672.221753411" lastFinishedPulling="2026-01-20 11:32:50.648434163 +0000 UTC m=+1698.856756136" observedRunningTime="2026-01-20 11:32:51.115833689 +0000 UTC m=+1699.324155662" watchObservedRunningTime="2026-01-20 11:32:51.120652041 +0000 UTC m=+1699.328974014" Jan 20 11:32:53 crc kubenswrapper[4725]: I0120 11:32:53.695696 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:53 crc kubenswrapper[4725]: I0120 11:32:53.696289 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:53 crc kubenswrapper[4725]: I0120 11:32:53.842030 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:32:53 crc kubenswrapper[4725]: I0120 11:32:53.934214 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:32:53 crc kubenswrapper[4725]: E0120 11:32:53.934515 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:03 crc 
kubenswrapper[4725]: I0120 11:33:03.731339 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-4fmg5" Jan 20 11:33:05 crc kubenswrapper[4725]: I0120 11:33:05.932590 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:33:05 crc kubenswrapper[4725]: E0120 11:33:05.932993 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.229660 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"] Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.234720 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.248433 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"] Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.257338 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.257655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.257715 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.358979 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.359046 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zz9ts\" (UniqueName: 
\"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.359094 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.359629 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.359741 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.379724 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.566898 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:09 crc kubenswrapper[4725]: I0120 11:33:09.817422 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4"] Jan 20 11:33:09 crc kubenswrapper[4725]: W0120 11:33:09.826477 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6c49be43_a86b_4475_8bd3_a1105dd19ad1.slice/crio-3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad WatchSource:0}: Error finding container 3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad: Status 404 returned error can't find the container with id 3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.036471 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"] Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.038900 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.061791 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"] Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.175320 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.175433 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.175761 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.245526 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerStarted","Data":"3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad"} Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.277050 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.277212 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.277265 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 
11:33:10.277989 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.278027 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.300269 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.355114 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:10 crc kubenswrapper[4725]: I0120 11:33:10.818417 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75"] Jan 20 11:33:10 crc kubenswrapper[4725]: W0120 11:33:10.830915 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod34d9f6e3_822c_4b9e_a9f1_4f5fa7a8ce83.slice/crio-cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d WatchSource:0}: Error finding container cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d: Status 404 returned error can't find the container with id cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d Jan 20 11:33:11 crc kubenswrapper[4725]: I0120 11:33:11.255299 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerStarted","Data":"cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d"} Jan 20 11:33:14 crc kubenswrapper[4725]: I0120 11:33:14.297511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerStarted","Data":"03cefba0f36e88b3436a6505be4355c483f681b8f10929f9dd65ac558dced7f7"} Jan 20 11:33:14 crc kubenswrapper[4725]: I0120 11:33:14.300654 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerStarted","Data":"44fd2d6a4962c66c239a0537bbedf0e1ea0e729472ffa414c4837765f7b23dda"} Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.313649 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" 
containerID="44fd2d6a4962c66c239a0537bbedf0e1ea0e729472ffa414c4837765f7b23dda" exitCode=0 Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.314034 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerID="5322f861c1a71f5da86bab990e805725991caaa0a88d6b181fd2a9c80b08ef00" exitCode=0 Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.313810 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerDied","Data":"44fd2d6a4962c66c239a0537bbedf0e1ea0e729472ffa414c4837765f7b23dda"} Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.314117 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerDied","Data":"5322f861c1a71f5da86bab990e805725991caaa0a88d6b181fd2a9c80b08ef00"} Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.317663 4725 generic.go:334] "Generic (PLEG): container finished" podID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerID="03cefba0f36e88b3436a6505be4355c483f681b8f10929f9dd65ac558dced7f7" exitCode=0 Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.317754 4725 generic.go:334] "Generic (PLEG): container finished" podID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerID="a7be2a6ad50c3f3a2562db87b8b10abe4e0c90fa599df8cdfadb6f48b6848f33" exitCode=0 Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.317721 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerDied","Data":"03cefba0f36e88b3436a6505be4355c483f681b8f10929f9dd65ac558dced7f7"} Jan 20 11:33:15 crc kubenswrapper[4725]: I0120 11:33:15.317841 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerDied","Data":"a7be2a6ad50c3f3a2562db87b8b10abe4e0c90fa599df8cdfadb6f48b6848f33"} Jan 20 11:33:16 crc kubenswrapper[4725]: I0120 11:33:16.331127 4725 generic.go:334] "Generic (PLEG): container finished" podID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerID="e23dca9b6f1fe94344a1bca068cb46f94d215c5d2bdc4f4696f3bd64a221d6d7" exitCode=0 Jan 20 11:33:16 crc kubenswrapper[4725]: I0120 11:33:16.331296 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerDied","Data":"e23dca9b6f1fe94344a1bca068cb46f94d215c5d2bdc4f4696f3bd64a221d6d7"} Jan 20 11:33:16 crc kubenswrapper[4725]: I0120 11:33:16.336811 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerID="ef2296c14bddc126931440ba3bf049299e7f9ff33e4cd0358862a289b7825f7c" exitCode=0 Jan 20 11:33:16 crc kubenswrapper[4725]: I0120 11:33:16.336877 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerDied","Data":"ef2296c14bddc126931440ba3bf049299e7f9ff33e4cd0358862a289b7825f7c"} Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 
11:33:17.613011 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.623942 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.640691 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") pod \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.640755 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") pod \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.641932 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle" (OuterVolumeSpecName: "bundle") pod "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" (UID: "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.640837 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") pod \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.642800 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") pod \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.642870 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") pod \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\" (UID: \"6c49be43-a86b-4475-8bd3-a1105dd19ad1\") " Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.642942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") pod \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\" (UID: \"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83\") " Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.643990 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.644135 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle" (OuterVolumeSpecName: "bundle") pod "6c49be43-a86b-4475-8bd3-a1105dd19ad1" (UID: "6c49be43-a86b-4475-8bd3-a1105dd19ad1"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.648183 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz" (OuterVolumeSpecName: "kube-api-access-8splz") pod "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" (UID: "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83"). InnerVolumeSpecName "kube-api-access-8splz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.648233 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts" (OuterVolumeSpecName: "kube-api-access-zz9ts") pod "6c49be43-a86b-4475-8bd3-a1105dd19ad1" (UID: "6c49be43-a86b-4475-8bd3-a1105dd19ad1"). InnerVolumeSpecName "kube-api-access-zz9ts". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.666767 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util" (OuterVolumeSpecName: "util") pod "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" (UID: "34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.668913 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util" (OuterVolumeSpecName: "util") pod "6c49be43-a86b-4475-8bd3-a1105dd19ad1" (UID: "6c49be43-a86b-4475-8bd3-a1105dd19ad1"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746510 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8splz\" (UniqueName: \"kubernetes.io/projected/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-kube-api-access-8splz\") on node \"crc\" DevicePath \"\"" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746556 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zz9ts\" (UniqueName: \"kubernetes.io/projected/6c49be43-a86b-4475-8bd3-a1105dd19ad1-kube-api-access-zz9ts\") on node \"crc\" DevicePath \"\"" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746581 4725 reconciler_common.go:293] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-bundle\") on node \"crc\" DevicePath \"\"" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746593 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/6c49be43-a86b-4475-8bd3-a1105dd19ad1-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.746606 4725 reconciler_common.go:293] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83-util\") on node \"crc\" DevicePath \"\"" Jan 20 11:33:17 crc kubenswrapper[4725]: I0120 11:33:17.933018 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:33:17 crc kubenswrapper[4725]: E0120 11:33:17.933341 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed 
container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.368363 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" event={"ID":"6c49be43-a86b-4475-8bd3-a1105dd19ad1","Type":"ContainerDied","Data":"3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad"} Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.368431 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3a159b0bed9e673f78d7daa7f265c5e7cfe4c8f310c11725483636be3ba159ad" Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.368607 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4" Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.372096 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" event={"ID":"34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83","Type":"ContainerDied","Data":"cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d"} Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.372163 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cab26bfe90f1bf0e6620e6cf3d789ce9e628ecbac58f28acdd489bed3c3e843d" Jan 20 11:33:18 crc kubenswrapper[4725]: I0120 11:33:18.372319 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.402847 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"] Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403652 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="util" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403669 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="util" Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403686 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="util" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403692 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="util" Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403708 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="pull" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403714 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="pull" Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403727 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="extract" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403734 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="extract" Jan 
20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403749 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="extract" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403756 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="extract" Jan 20 11:33:21 crc kubenswrapper[4725]: E0120 11:33:21.403767 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="pull" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403776 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="pull" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403895 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83" containerName="extract" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.403919 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c49be43-a86b-4475-8bd3-a1105dd19ad1" containerName="extract" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.404478 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.409428 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-operator-dockercfg-btv9g" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.420998 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"] Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.507686 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jhxdj\" (UniqueName: \"kubernetes.io/projected/288c5de6-7288-478c-b790-1f348c4827f4-kube-api-access-jhxdj\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.507802 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/288c5de6-7288-478c-b790-1f348c4827f4-runner\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.608867 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/288c5de6-7288-478c-b790-1f348c4827f4-runner\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.608965 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-jhxdj\" (UniqueName: \"kubernetes.io/projected/288c5de6-7288-478c-b790-1f348c4827f4-kube-api-access-jhxdj\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.609554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded 
for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/288c5de6-7288-478c-b790-1f348c4827f4-runner\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.631999 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-jhxdj\" (UniqueName: \"kubernetes.io/projected/288c5de6-7288-478c-b790-1f348c4827f4-kube-api-access-jhxdj\") pod \"smart-gateway-operator-86d4f8cb59-xtrqk\" (UID: \"288c5de6-7288-478c-b790-1f348c4827f4\") " pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" Jan 20 11:33:21 crc kubenswrapper[4725]: I0120 11:33:21.725598 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" Jan 20 11:33:22 crc kubenswrapper[4725]: I0120 11:33:22.015377 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk"] Jan 20 11:33:22 crc kubenswrapper[4725]: I0120 11:33:22.028809 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:33:22 crc kubenswrapper[4725]: I0120 11:33:22.406590 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" event={"ID":"288c5de6-7288-478c-b790-1f348c4827f4","Type":"ContainerStarted","Data":"559de67ea891095e76457a3bd24bcea7059b730dcd106995695931a130a8cb47"} Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.456223 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-9d4584887-5t9dx"] Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.457910 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.462693 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"service-telemetry-operator-dockercfg-trjzb" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.479828 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-9d4584887-5t9dx"] Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.562799 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4zf6k\" (UniqueName: \"kubernetes.io/projected/653691a1-9088-47bd-97e2-4d2f17f885bf-kube-api-access-4zf6k\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.563184 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/653691a1-9088-47bd-97e2-4d2f17f885bf-runner\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.664434 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-4zf6k\" (UniqueName: \"kubernetes.io/projected/653691a1-9088-47bd-97e2-4d2f17f885bf-kube-api-access-4zf6k\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.664599 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/653691a1-9088-47bd-97e2-4d2f17f885bf-runner\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.665346 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/653691a1-9088-47bd-97e2-4d2f17f885bf-runner\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.690258 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-4zf6k\" (UniqueName: \"kubernetes.io/projected/653691a1-9088-47bd-97e2-4d2f17f885bf-kube-api-access-4zf6k\") pod \"service-telemetry-operator-9d4584887-5t9dx\" (UID: \"653691a1-9088-47bd-97e2-4d2f17f885bf\") " pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:24 crc kubenswrapper[4725]: I0120 11:33:24.788245 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" Jan 20 11:33:25 crc kubenswrapper[4725]: I0120 11:33:25.108961 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-9d4584887-5t9dx"] Jan 20 11:33:25 crc kubenswrapper[4725]: I0120 11:33:25.458171 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" event={"ID":"653691a1-9088-47bd-97e2-4d2f17f885bf","Type":"ContainerStarted","Data":"8bd33e42645799c3eb6694bb46c468ee8d85e8dea1f736fd1ef922b58597829e"} Jan 20 11:33:29 crc kubenswrapper[4725]: I0120 11:33:29.932701 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:33:29 crc kubenswrapper[4725]: E0120 11:33:29.933559 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:41 crc kubenswrapper[4725]: E0120 11:33:41.085664 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying config: context canceled" image="quay.io/infrawatch/smart-gateway-operator:latest" Jan 20 11:33:41 crc kubenswrapper[4725]: E0120 11:33:41.086818 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:operator,Image:quay.io/infrawatch/smart-gateway-operator:latest,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:WATCH_NAMESPACE,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.annotations['olm.targetNamespaces'],},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:POD_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:metadata.name,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},EnvVar{Name:OPERATOR_NAME,Value:smart-gateway-operator,ValueFrom:nil,},EnvVar{Name:ANSIBLE_GATHERING,Value:explicit,ValueFrom:nil,},EnvVar{Name:ANSIBLE_VERBOSITY_SMARTGATEWAY_SMARTGATEWAY_INFRA_WATCH,Value:4,ValueFrom:nil,},EnvVar{Name:ANSIBLE_DEBUG_LOGS,Value:true,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_CORE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-core:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_BRIDGE_SMARTGATEWAY_IMAGE,Value:image-registry.openshift-image-registry.svc:5000/service-telemetry/sg-bridge:latest,ValueFrom:nil,},EnvVar{Name:RELATED_IMAGE_OAUTH_PROXY_IMAGE,Value:quay.io/openshift/origin-oauth-proxy:latest,ValueFrom:nil,},EnvVar{Name:OPERATOR_CONDITION_NAME,Value:smart-gateway-operator.v5.0.1768908623,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:runner,ReadOnly:false,MountPath:/tmp/ansible-operator/runner,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-jhxdj,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/t
ermination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod smart-gateway-operator-86d4f8cb59-xtrqk_service-telemetry(288c5de6-7288-478c-b790-1f348c4827f4): ErrImagePull: rpc error: code = Canceled desc = copying config: context canceled" logger="UnhandledError" Jan 20 11:33:41 crc kubenswrapper[4725]: E0120 11:33:41.087941 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ErrImagePull: \"rpc error: code = Canceled desc = copying config: context canceled\"" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" podUID="288c5de6-7288-478c-b790-1f348c4827f4" Jan 20 11:33:41 crc kubenswrapper[4725]: E0120 11:33:41.698414 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"operator\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/infrawatch/smart-gateway-operator:latest\\\"\"" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" podUID="288c5de6-7288-478c-b790-1f348c4827f4" Jan 20 11:33:44 crc kubenswrapper[4725]: I0120 11:33:44.932253 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:33:44 crc kubenswrapper[4725]: E0120 11:33:44.933250 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:46 crc kubenswrapper[4725]: I0120 11:33:46.707833 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" event={"ID":"653691a1-9088-47bd-97e2-4d2f17f885bf","Type":"ContainerStarted","Data":"9791f25257498b3668ca277987870437a8b97d840ef7e3456f35603613b24107"} Jan 20 11:33:46 crc kubenswrapper[4725]: I0120 11:33:46.730903 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-9d4584887-5t9dx" podStartSLOduration=1.441243935 podStartE2EDuration="22.730880166s" podCreationTimestamp="2026-01-20 11:33:24 +0000 UTC" firstStartedPulling="2026-01-20 11:33:25.125807117 +0000 UTC m=+1733.334129090" lastFinishedPulling="2026-01-20 11:33:46.415443348 +0000 UTC m=+1754.623765321" observedRunningTime="2026-01-20 11:33:46.728653536 +0000 UTC m=+1754.936975509" watchObservedRunningTime="2026-01-20 11:33:46.730880166 +0000 UTC m=+1754.939202139" Jan 20 11:33:56 crc kubenswrapper[4725]: I0120 11:33:56.933017 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:33:56 crc kubenswrapper[4725]: E0120 11:33:56.933952 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with 
CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:33:58 crc kubenswrapper[4725]: I0120 11:33:58.818624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" event={"ID":"288c5de6-7288-478c-b790-1f348c4827f4","Type":"ContainerStarted","Data":"da4e3cfe0898b44b38d7c038c7438c0d8100cace2c19611d1d7173c81f86732c"} Jan 20 11:33:58 crc kubenswrapper[4725]: I0120 11:33:58.847985 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-86d4f8cb59-xtrqk" podStartSLOduration=1.60321368 podStartE2EDuration="37.847954593s" podCreationTimestamp="2026-01-20 11:33:21 +0000 UTC" firstStartedPulling="2026-01-20 11:33:22.028509383 +0000 UTC m=+1730.236831356" lastFinishedPulling="2026-01-20 11:33:58.273250296 +0000 UTC m=+1766.481572269" observedRunningTime="2026-01-20 11:33:58.844566566 +0000 UTC m=+1767.052888539" watchObservedRunningTime="2026-01-20 11:33:58.847954593 +0000 UTC m=+1767.056276566" Jan 20 11:34:08 crc kubenswrapper[4725]: I0120 11:34:08.932378 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:08 crc kubenswrapper[4725]: E0120 11:34:08.933603 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:13 crc kubenswrapper[4725]: I0120 11:34:13.286888 4725 scope.go:117] "RemoveContainer" containerID="c7d0859d4065010f243a29a233ddba2921cdc8ab64a769f55e2ecc4ca1c5a41a" Jan 20 11:34:13 crc kubenswrapper[4725]: I0120 11:34:13.325380 4725 scope.go:117] "RemoveContainer" containerID="79cfb45a8c90dfaa65e6bb289f91b498d6f80d05aa29a1f6d45fa2050d0f30eb" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.620348 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"] Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.626993 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.632757 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-credentials" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.633063 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-credentials" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634065 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-users" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634343 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-dockercfg-w6m24" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634497 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-openstack-ca" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634669 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-interconnect-sasl-config" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.634946 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-inter-router-ca" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.660198 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"] Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779233 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779309 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779339 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779370 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779390 
4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779421 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.779459 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.880901 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.880981 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881044 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881099 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881141 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881180 
4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.881235 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.882418 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.890289 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.890289 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.891204 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.893026 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.893326 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.911140 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") pod \"default-interconnect-68864d46cb-pg5vh\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") " pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:16 crc kubenswrapper[4725]: I0120 11:34:16.958438 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" Jan 20 11:34:17 crc kubenswrapper[4725]: I0120 11:34:17.287116 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"] Jan 20 11:34:17 crc kubenswrapper[4725]: I0120 11:34:17.992859 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" event={"ID":"a7ed1b92-041f-4075-bbc5-89e61158d803","Type":"ContainerStarted","Data":"fc8242d5514e690ee80b2bdcc2ff5977848ca545548efc96d47954b1674d6f08"} Jan 20 11:34:19 crc kubenswrapper[4725]: I0120 11:34:19.933046 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:19 crc kubenswrapper[4725]: E0120 11:34:19.933643 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:26 crc kubenswrapper[4725]: I0120 11:34:26.111323 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" event={"ID":"a7ed1b92-041f-4075-bbc5-89e61158d803","Type":"ContainerStarted","Data":"c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033"} Jan 20 11:34:26 crc kubenswrapper[4725]: I0120 11:34:26.138415 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" podStartSLOduration=2.453642429 podStartE2EDuration="10.138385808s" podCreationTimestamp="2026-01-20 11:34:16 +0000 UTC" firstStartedPulling="2026-01-20 11:34:17.31773833 +0000 UTC m=+1785.526060303" lastFinishedPulling="2026-01-20 11:34:25.002481709 +0000 UTC m=+1793.210803682" observedRunningTime="2026-01-20 11:34:26.134688562 +0000 UTC m=+1794.343010555" watchObservedRunningTime="2026-01-20 11:34:26.138385808 +0000 UTC m=+1794.346707791" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.502937 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.523964 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.527652 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-1" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.527977 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"serving-certs-ca-bundle" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.528013 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-stf-dockercfg-jjxsd" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.528210 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.528310 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-2" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530147 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-session-secret" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530335 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-web-config" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530408 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"prometheus-default-rulefiles-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530687 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"prometheus-default-tls-assets-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.530821 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-prometheus-proxy-tls" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.543991 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624400 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-web-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624465 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624526 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624552 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624573 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-tls-assets\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7b4f\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-kube-api-access-c7b4f\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624675 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624695 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d31d6ca-dd83-489d-9956-abb0947df80d-config-out\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624718 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.624736 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc 
kubenswrapper[4725]: I0120 11:34:30.725243 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725326 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725354 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725385 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725417 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-tls-assets\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725465 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-c7b4f\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-kube-api-access-c7b4f\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725527 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d31d6ca-dd83-489d-9956-abb0947df80d-config-out\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725557 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: 
\"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725587 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725634 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-web-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.725671 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.727691 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: E0120 11:34:30.728330 4725 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.728358 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: E0120 11:34:30.728411 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls podName:7d31d6ca-dd83-489d-9956-abb0947df80d nodeName:}" failed. No retries permitted until 2026-01-20 11:34:31.228387752 +0000 UTC m=+1799.436709745 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7d31d6ca-dd83-489d-9956-abb0947df80d") : secret "default-prometheus-proxy-tls" not found Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.729203 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.730148 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7d31d6ca-dd83-489d-9956-abb0947df80d-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.736271 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.737448 4725 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.737565 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/908af6317c94b2e5474affd556a5be241a0c727008a51d32804b368dae340079/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.742095 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.743565 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-tls-assets\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.746977 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/7d31d6ca-dd83-489d-9956-abb0947df80d-config-out\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.748162 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-c7b4f\" (UniqueName: \"kubernetes.io/projected/7d31d6ca-dd83-489d-9956-abb0947df80d-kube-api-access-c7b4f\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.753788 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-web-config\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:30 crc kubenswrapper[4725]: I0120 11:34:30.766037 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-ceb51b36-53ea-416a-89c9-3c7434a988e0\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:31 crc kubenswrapper[4725]: I0120 11:34:31.236274 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:31 crc kubenswrapper[4725]: E0120 11:34:31.236555 4725 secret.go:188] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 20 11:34:31 crc kubenswrapper[4725]: E0120 11:34:31.237456 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls podName:7d31d6ca-dd83-489d-9956-abb0947df80d nodeName:}" failed. No retries permitted until 2026-01-20 11:34:32.23743111 +0000 UTC m=+1800.445753083 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "7d31d6ca-dd83-489d-9956-abb0947df80d") : secret "default-prometheus-proxy-tls" not found Jan 20 11:34:31 crc kubenswrapper[4725]: I0120 11:34:31.932414 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:31 crc kubenswrapper[4725]: E0120 11:34:31.932736 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:32 crc kubenswrapper[4725]: I0120 11:34:32.252959 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:32 crc kubenswrapper[4725]: I0120 11:34:32.261214 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/7d31d6ca-dd83-489d-9956-abb0947df80d-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"7d31d6ca-dd83-489d-9956-abb0947df80d\") " pod="service-telemetry/prometheus-default-0" Jan 20 11:34:32 crc kubenswrapper[4725]: I0120 11:34:32.397436 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 20 11:34:32 crc kubenswrapper[4725]: I0120 11:34:32.643175 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 20 11:34:33 crc kubenswrapper[4725]: I0120 11:34:33.173197 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"d1405246f054d1947c821cc7c3d161838e6c13d415170f5b0b5bb932d8f89acc"} Jan 20 11:34:40 crc kubenswrapper[4725]: I0120 11:34:40.319725 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"72678a1a44d5458fc7e50e2ea55e25f8b66682610319324db3747cf67d49708a"} Jan 20 11:34:42 crc kubenswrapper[4725]: I0120 11:34:42.941213 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:42 crc kubenswrapper[4725]: E0120 11:34:42.942162 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:44 crc kubenswrapper[4725]: I0120 11:34:44.805406 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-fxcvg"] Jan 20 11:34:44 crc kubenswrapper[4725]: I0120 11:34:44.806666 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:44 crc kubenswrapper[4725]: I0120 11:34:44.817504 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-fxcvg"] Jan 20 11:34:44 crc kubenswrapper[4725]: I0120 11:34:44.917583 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fkrmb\" (UniqueName: \"kubernetes.io/projected/c22fff0f-fa8e-40e0-a8dc-a138398b06e7-kube-api-access-fkrmb\") pod \"default-snmp-webhook-6856cfb745-fxcvg\" (UID: \"c22fff0f-fa8e-40e0-a8dc-a138398b06e7\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:45 crc kubenswrapper[4725]: I0120 11:34:45.019238 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-fkrmb\" (UniqueName: \"kubernetes.io/projected/c22fff0f-fa8e-40e0-a8dc-a138398b06e7-kube-api-access-fkrmb\") pod \"default-snmp-webhook-6856cfb745-fxcvg\" (UID: \"c22fff0f-fa8e-40e0-a8dc-a138398b06e7\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:45 crc kubenswrapper[4725]: I0120 11:34:45.044508 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-fkrmb\" (UniqueName: \"kubernetes.io/projected/c22fff0f-fa8e-40e0-a8dc-a138398b06e7-kube-api-access-fkrmb\") pod \"default-snmp-webhook-6856cfb745-fxcvg\" (UID: \"c22fff0f-fa8e-40e0-a8dc-a138398b06e7\") " pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:45 crc kubenswrapper[4725]: I0120 11:34:45.127312 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" Jan 20 11:34:45 crc kubenswrapper[4725]: I0120 11:34:45.367048 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-6856cfb745-fxcvg"] Jan 20 11:34:46 crc kubenswrapper[4725]: I0120 11:34:46.374898 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" event={"ID":"c22fff0f-fa8e-40e0-a8dc-a138398b06e7","Type":"ContainerStarted","Data":"93a372b28c4f0c6d5a862baa1a11854381ab162740d51d354dc13d27dd09e1c2"} Jan 20 11:34:49 crc kubenswrapper[4725]: I0120 11:34:49.401792 4725 generic.go:334] "Generic (PLEG): container finished" podID="7d31d6ca-dd83-489d-9956-abb0947df80d" containerID="72678a1a44d5458fc7e50e2ea55e25f8b66682610319324db3747cf67d49708a" exitCode=0 Jan 20 11:34:49 crc kubenswrapper[4725]: I0120 11:34:49.402130 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerDied","Data":"72678a1a44d5458fc7e50e2ea55e25f8b66682610319324db3747cf67d49708a"} Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.091996 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.101861 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.102406 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.106328 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-tls-assets-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.106431 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-alertmanager-proxy-tls" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.107248 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-generated" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.107450 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-cluster-tls-config" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.107607 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-default-web-config" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.107753 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"alertmanager-stf-dockercfg-49kjc" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.282849 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.282925 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-web-config\") pod \"alertmanager-default-0\" (UID: 
\"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.282961 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-config-volume\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.282983 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283252 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f490a619-9c48-49a0-857b-904084871923-config-out\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283376 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dtlxc\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-kube-api-access-dtlxc\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283461 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.283488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.391671 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-config-volume\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392184 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" 
(UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392243 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f490a619-9c48-49a0-857b-904084871923-config-out\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392262 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392292 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dtlxc\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-kube-api-access-dtlxc\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392334 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392357 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.392412 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.393222 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-web-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: E0120 11:34:53.393396 4725 secret.go:188] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 20 11:34:53 crc kubenswrapper[4725]: E0120 11:34:53.393519 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls podName:f490a619-9c48-49a0-857b-904084871923 nodeName:}" failed. 
No retries permitted until 2026-01-20 11:34:53.89349349 +0000 UTC m=+1822.101815453 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "f490a619-9c48-49a0-857b-904084871923") : secret "default-alertmanager-proxy-tls" not found Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.400564 4725 csi_attacher.go:380] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.400609 4725 operation_generator.go:580] "MountVolume.MountDevice succeeded for volume \"pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/334262ccefad4140c333c19789367f9cb48a75b8cc6e1f6bc07181136c225adc/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.411427 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-web-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.412569 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-config-volume\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.412666 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.415632 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/f490a619-9c48-49a0-857b-904084871923-config-out\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.417693 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-tls-assets\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.417796 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dtlxc\" (UniqueName: \"kubernetes.io/projected/f490a619-9c48-49a0-857b-904084871923-kube-api-access-dtlxc\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 
11:34:53.418885 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.462768 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-08b68e4d-5706-4305-9a23-32acf3b55ffb\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.908317 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:53 crc kubenswrapper[4725]: I0120 11:34:53.928430 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/f490a619-9c48-49a0-857b-904084871923-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"f490a619-9c48-49a0-857b-904084871923\") " pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:54 crc kubenswrapper[4725]: I0120 11:34:54.030058 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 20 11:34:56 crc kubenswrapper[4725]: I0120 11:34:56.340123 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 20 11:34:57 crc kubenswrapper[4725]: I0120 11:34:57.494054 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"9a2b0027ff5aeb5779a8c1ad7b4f7cb9efaea130c14f3240351ce7cfa4b1f4b2"} Jan 20 11:34:57 crc kubenswrapper[4725]: I0120 11:34:57.933804 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:34:57 crc kubenswrapper[4725]: E0120 11:34:57.935689 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:34:58 crc kubenswrapper[4725]: I0120 11:34:58.762269 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" event={"ID":"c22fff0f-fa8e-40e0-a8dc-a138398b06e7","Type":"ContainerStarted","Data":"5757bd7654b6e4c606e79a28de828d0b3a966a8ee3b3528d8bac9e6ae3d5dc9d"} Jan 20 11:34:58 crc kubenswrapper[4725]: I0120 11:34:58.800637 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-6856cfb745-fxcvg" podStartSLOduration=3.45375073 podStartE2EDuration="14.800604129s" 
podCreationTimestamp="2026-01-20 11:34:44 +0000 UTC" firstStartedPulling="2026-01-20 11:34:45.379821688 +0000 UTC m=+1813.588143661" lastFinishedPulling="2026-01-20 11:34:56.726675087 +0000 UTC m=+1824.934997060" observedRunningTime="2026-01-20 11:34:58.793477184 +0000 UTC m=+1827.001799167" watchObservedRunningTime="2026-01-20 11:34:58.800604129 +0000 UTC m=+1827.008926102" Jan 20 11:35:09 crc kubenswrapper[4725]: I0120 11:35:09.926656 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"df082aed2f55c214afd488ebe846d87cb0693d738700a6bbba98647e748c15de"} Jan 20 11:35:11 crc kubenswrapper[4725]: E0120 11:35:11.448353 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/prometheus/prometheus:latest" Jan 20 11:35:11 crc kubenswrapper[4725]: E0120 11:35:11.448963 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:prometheus,Image:quay.io/prometheus/prometheus:latest,Command:[],Args:[--config.file=/etc/prometheus/config_out/prometheus.env.yaml --web.enable-lifecycle --web.route-prefix=/ --web.listen-address=127.0.0.1:9090 --storage.tsdb.retention.time=24h --storage.tsdb.path=/prometheus --web.config.file=/etc/prometheus/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/prometheus/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/prometheus/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-db,ReadOnly:false,MountPath:/prometheus,SubPath:prometheus-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-prometheus-proxy-tls,ReadOnly:true,MountPath:/etc/prometheus/secrets/default-prometheus-proxy-tls,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:true,MountPath:/etc/prometheus/secrets/default-session-secret,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:configmap-serving-certs-ca-bundle,ReadOnly:true,MountPath:/etc/prometheus/configmaps/serving-certs-ca-bundle,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-rulefiles-0,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-default-rulefiles-0,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-rulefiles-1,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-default-rulefiles-1,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:prometheus-default-rulefiles-2,ReadOnly:true,MountPath:/etc/prometheus/rules/prometheus-default-rulefiles-2,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/prometheus/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-c7b4f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPat
hExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/healthy; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/healthy; else exit 1; fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:5,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[sh -c if [ -x \"$(command -v curl)\" ]; then exec curl --fail http://localhost:9090/-/ready; elif [ -x \"$(command -v wget)\" ]; then exec wget -q -O /dev/null http://localhost:9090/-/ready; else exit 1; fi],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:3,PeriodSeconds:15,SuccessThreshold:1,FailureThreshold:60,TerminationGracePeriodSeconds:nil,},ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod prometheus-default-0_service-telemetry(7d31d6ca-dd83-489d-9956-abb0947df80d): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError" Jan 20 11:35:11 crc kubenswrapper[4725]: I0120 11:35:11.932720 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:35:11 crc kubenswrapper[4725]: E0120 11:35:11.933292 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:35:13 crc kubenswrapper[4725]: I0120 11:35:13.396136 4725 scope.go:117] "RemoveContainer" containerID="cd00c1d1c97cef650c244c3d6f0815615e809a8aa5699e75689e183731a91ab5" Jan 20 11:35:13 crc kubenswrapper[4725]: I0120 11:35:13.421259 4725 scope.go:117] "RemoveContainer" containerID="c39d6de3e24d8f3a14c460d9395b3e4c5d0c7f4110899d7ced5dff416dd88a6f" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.851837 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g"] Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.857859 4725 util.go:30] "No sandbox for pod can be 
found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.861602 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-coll-meter-proxy-tls" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.861698 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-meter-sg-core-configmap" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.861735 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-session-secret" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.861794 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"smart-gateway-dockercfg-wn46n" Jan 20 11:35:17 crc kubenswrapper[4725]: I0120 11:35:17.874654 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g"] Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.052776 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.052901 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pd65f\" (UniqueName: \"kubernetes.io/projected/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-kube-api-access-pd65f\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.052977 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.053060 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.053316 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.154732 4725 reconciler_common.go:218] 
"operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.154794 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.154835 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: E0120 11:35:18.154963 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 20 11:35:18 crc kubenswrapper[4725]: E0120 11:35:18.155113 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls podName:10b6bc99-b2ce-4952-a481-bbabe3a3fc16 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:18.655068148 +0000 UTC m=+1846.863390121 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" (UID: "10b6bc99-b2ce-4952-a481-bbabe3a3fc16") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.155458 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.155566 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.156366 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pd65f\" (UniqueName: \"kubernetes.io/projected/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-kube-api-access-pd65f\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.156302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.165032 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.183124 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pd65f\" (UniqueName: \"kubernetes.io/projected/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-kube-api-access-pd65f\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 crc kubenswrapper[4725]: I0120 11:35:18.663182 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:18 
crc kubenswrapper[4725]: E0120 11:35:18.663435 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 20 11:35:18 crc kubenswrapper[4725]: E0120 11:35:18.663567 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls podName:10b6bc99-b2ce-4952-a481-bbabe3a3fc16 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:19.663533659 +0000 UTC m=+1847.871855632 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" (UID: "10b6bc99-b2ce-4952-a481-bbabe3a3fc16") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 20 11:35:19 crc kubenswrapper[4725]: I0120 11:35:19.706149 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:19 crc kubenswrapper[4725]: I0120 11:35:19.712071 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/10b6bc99-b2ce-4952-a481-bbabe3a3fc16-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g\" (UID: \"10b6bc99-b2ce-4952-a481-bbabe3a3fc16\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:19 crc kubenswrapper[4725]: I0120 11:35:19.975878 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.271750 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g"] Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.403094 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"09480f9645230bbbc0e55c635977c21a5b4f0d489349232d74325109b2eef5ad"} Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.410452 4725 generic.go:334] "Generic (PLEG): container finished" podID="f490a619-9c48-49a0-857b-904084871923" containerID="df082aed2f55c214afd488ebe846d87cb0693d738700a6bbba98647e748c15de" exitCode=0 Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.410535 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerDied","Data":"df082aed2f55c214afd488ebe846d87cb0693d738700a6bbba98647e748c15de"} Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.586561 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p"] Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.588409 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.616517 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p"] Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.620201 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-meter-sg-core-configmap" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.620378 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-ceil-meter-proxy-tls" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727660 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b74ea17-71c5-47e0-a15e-e963223f11f0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727744 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6b74ea17-71c5-47e0-a15e-e963223f11f0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727791 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qlhmr\" (UniqueName: \"kubernetes.io/projected/6b74ea17-71c5-47e0-a15e-e963223f11f0-kube-api-access-qlhmr\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: 
\"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727905 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:20 crc kubenswrapper[4725]: I0120 11:35:20.727980 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.015994 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/6b74ea17-71c5-47e0-a15e-e963223f11f0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.016089 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6b74ea17-71c5-47e0-a15e-e963223f11f0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.016133 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qlhmr\" (UniqueName: \"kubernetes.io/projected/6b74ea17-71c5-47e0-a15e-e963223f11f0-kube-api-access-qlhmr\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.016167 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.016252 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.017966 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" 
(UniqueName: \"kubernetes.io/empty-dir/6b74ea17-71c5-47e0-a15e-e963223f11f0-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.020811 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/6b74ea17-71c5-47e0-a15e-e963223f11f0-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.027362 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: E0120 11:35:21.027961 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 20 11:35:21 crc kubenswrapper[4725]: E0120 11:35:21.028018 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls podName:6b74ea17-71c5-47e0-a15e-e963223f11f0 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:21.528000034 +0000 UTC m=+1849.736322007 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" (UID: "6b74ea17-71c5-47e0-a15e-e963223f11f0") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.045685 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qlhmr\" (UniqueName: \"kubernetes.io/projected/6b74ea17-71c5-47e0-a15e-e963223f11f0-kube-api-access-qlhmr\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: I0120 11:35:21.627351 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:21 crc kubenswrapper[4725]: E0120 11:35:21.627614 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 20 11:35:21 crc kubenswrapper[4725]: E0120 11:35:21.628013 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls podName:6b74ea17-71c5-47e0-a15e-e963223f11f0 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:22.627980968 +0000 UTC m=+1850.836302941 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" (UID: "6b74ea17-71c5-47e0-a15e-e963223f11f0") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 20 11:35:22 crc kubenswrapper[4725]: I0120 11:35:22.851186 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:22 crc kubenswrapper[4725]: I0120 11:35:22.877812 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6b74ea17-71c5-47e0-a15e-e963223f11f0-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p\" (UID: \"6b74ea17-71c5-47e0-a15e-e963223f11f0\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:23 crc kubenswrapper[4725]: I0120 11:35:23.020569 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" Jan 20 11:35:23 crc kubenswrapper[4725]: I0120 11:35:23.582711 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p"] Jan 20 11:35:23 crc kubenswrapper[4725]: I0120 11:35:23.892191 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"c1807948eb9f0450bbf79d5b7abc0df4cb8deeb137fad6aa5f0f7f4580a1680d"} Jan 20 11:35:25 crc kubenswrapper[4725]: I0120 11:35:25.932185 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:35:25 crc kubenswrapper[4725]: E0120 11:35:25.932695 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.229194 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"] Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.231884 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.234388 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"] Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.235749 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-sens-meter-sg-core-configmap" Jan 20 11:35:29 crc kubenswrapper[4725]: I0120 11:35:29.238302 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-cloud1-sens-meter-proxy-tls" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.025853 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/14922311-0e93-4bf9-8980-72baefd93497-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.025915 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6qsvd\" (UniqueName: \"kubernetes.io/projected/14922311-0e93-4bf9-8980-72baefd93497-kube-api-access-6qsvd\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.025985 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: 
\"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.026034 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.026070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/14922311-0e93-4bf9-8980-72baefd93497-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.128113 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.128210 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.128284 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/14922311-0e93-4bf9-8980-72baefd93497-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" Jan 20 11:35:30 crc kubenswrapper[4725]: E0120 11:35:30.128423 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 20 11:35:30 crc kubenswrapper[4725]: E0120 11:35:30.128583 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls podName:14922311-0e93-4bf9-8980-72baefd93497 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:30.628548881 +0000 UTC m=+1858.836871024 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" (UID: "14922311-0e93-4bf9-8980-72baefd93497") : secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.129457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/14922311-0e93-4bf9-8980-72baefd93497-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.129615 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/14922311-0e93-4bf9-8980-72baefd93497-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.129669 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-6qsvd\" (UniqueName: \"kubernetes.io/projected/14922311-0e93-4bf9-8980-72baefd93497-kube-api-access-6qsvd\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.130220 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/14922311-0e93-4bf9-8980-72baefd93497-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.138069 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.156809 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-6qsvd\" (UniqueName: \"kubernetes.io/projected/14922311-0e93-4bf9-8980-72baefd93497-kube-api-access-6qsvd\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.185955 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"a32622a14d74d1fef0b9d8644fbb1668a79a67a9b61cc46eccad64e34247dc3d"}
Jan 20 11:35:30 crc kubenswrapper[4725]: I0120 11:35:30.638544 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:30 crc kubenswrapper[4725]: E0120 11:35:30.638822 4725 secret.go:188] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 20 11:35:30 crc kubenswrapper[4725]: E0120 11:35:30.638895 4725 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls podName:14922311-0e93-4bf9-8980-72baefd93497 nodeName:}" failed. No retries permitted until 2026-01-20 11:35:31.638875588 +0000 UTC m=+1859.847197561 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" (UID: "14922311-0e93-4bf9-8980-72baefd93497") : secret "default-cloud1-sens-meter-proxy-tls" not found
Jan 20 11:35:32 crc kubenswrapper[4725]: I0120 11:35:32.249231 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:32 crc kubenswrapper[4725]: I0120 11:35:32.259109 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/14922311-0e93-4bf9-8980-72baefd93497-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7\" (UID: \"14922311-0e93-4bf9-8980-72baefd93497\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:32 crc kubenswrapper[4725]: I0120 11:35:32.278606 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.497961 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"]
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.500046 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.504927 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-coll-event-sg-core-configmap"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.505194 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"elasticsearch-es-cert"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.510939 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"]
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.608940 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/739b7c2c-b11b-4260-a184-7dd184677dad-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.609704 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/739b7c2c-b11b-4260-a184-7dd184677dad-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.609787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/739b7c2c-b11b-4260-a184-7dd184677dad-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.609821 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tpd67\" (UniqueName: \"kubernetes.io/projected/739b7c2c-b11b-4260-a184-7dd184677dad-kube-api-access-tpd67\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.711970 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/739b7c2c-b11b-4260-a184-7dd184677dad-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.712069 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-tpd67\" (UniqueName: \"kubernetes.io/projected/739b7c2c-b11b-4260-a184-7dd184677dad-kube-api-access-tpd67\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.712154 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/739b7c2c-b11b-4260-a184-7dd184677dad-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.712192 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/739b7c2c-b11b-4260-a184-7dd184677dad-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.712976 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/739b7c2c-b11b-4260-a184-7dd184677dad-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.713510 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/739b7c2c-b11b-4260-a184-7dd184677dad-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.722860 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/739b7c2c-b11b-4260-a184-7dd184677dad-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.734955 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-tpd67\" (UniqueName: \"kubernetes.io/projected/739b7c2c-b11b-4260-a184-7dd184677dad-kube-api-access-tpd67\") pod \"default-cloud1-coll-event-smartgateway-ff457bf89-458zm\" (UID: \"739b7c2c-b11b-4260-a184-7dd184677dad\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:38 crc kubenswrapper[4725]: I0120 11:35:38.827588 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"
Jan 20 11:35:39 crc kubenswrapper[4725]: I0120 11:35:39.270860 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7"]
Jan 20 11:35:40 crc kubenswrapper[4725]: W0120 11:35:40.690667 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod14922311_0e93_4bf9_8980_72baefd93497.slice/crio-76b997e55ce106f2e175cf6c578f704d2627263282a478ea6107f55b0ec5bd36 WatchSource:0}: Error finding container 76b997e55ce106f2e175cf6c578f704d2627263282a478ea6107f55b0ec5bd36: Status 404 returned error can't find the container with id 76b997e55ce106f2e175cf6c578f704d2627263282a478ea6107f55b0ec5bd36
Jan 20 11:35:40 crc kubenswrapper[4725]: I0120 11:35:40.934563 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"
Jan 20 11:35:40 crc kubenswrapper[4725]: E0120 11:35:40.934874 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.024294 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"]
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.025716 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.034673 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"default-cloud1-ceil-event-sg-core-configmap"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.036461 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qt9h8\" (UniqueName: \"kubernetes.io/projected/f84a2726-80cb-4393-84ca-d901b4ee446c-kube-api-access-qt9h8\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.036559 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f84a2726-80cb-4393-84ca-d901b4ee446c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.036652 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f84a2726-80cb-4393-84ca-d901b4ee446c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.036805 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f84a2726-80cb-4393-84ca-d901b4ee446c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.046029 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"]
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.359590 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f84a2726-80cb-4393-84ca-d901b4ee446c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.359726 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qt9h8\" (UniqueName: \"kubernetes.io/projected/f84a2726-80cb-4393-84ca-d901b4ee446c-kube-api-access-qt9h8\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.359767 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f84a2726-80cb-4393-84ca-d901b4ee446c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.359791 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f84a2726-80cb-4393-84ca-d901b4ee446c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.360717 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/f84a2726-80cb-4393-84ca-d901b4ee446c-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.360945 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/f84a2726-80cb-4393-84ca-d901b4ee446c-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.371613 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/f84a2726-80cb-4393-84ca-d901b4ee446c-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.378392 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"76b997e55ce106f2e175cf6c578f704d2627263282a478ea6107f55b0ec5bd36"}
Jan 20 11:35:41 crc kubenswrapper[4725]: I0120 11:35:41.411554 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-qt9h8\" (UniqueName: \"kubernetes.io/projected/f84a2726-80cb-4393-84ca-d901b4ee446c-kube-api-access-qt9h8\") pod \"default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q\" (UID: \"f84a2726-80cb-4393-84ca-d901b4ee446c\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:42 crc kubenswrapper[4725]: I0120 11:35:42.096550 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"
Jan 20 11:35:42 crc kubenswrapper[4725]: E0120 11:35:42.269376 4725 log.go:32] "PullImage from image service failed" err="rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" image="quay.io/prometheus/alertmanager:latest"
Jan 20 11:35:42 crc kubenswrapper[4725]: E0120 11:35:42.270208 4725 kuberuntime_manager.go:1274] "Unhandled Error" err="container &Container{Name:alertmanager,Image:quay.io/prometheus/alertmanager:latest,Command:[],Args:[--config.file=/etc/alertmanager/config_out/alertmanager.env.yaml --storage.path=/alertmanager --data.retention=120h --cluster.listen-address= --web.listen-address=127.0.0.1:9093 --web.route-prefix=/ --cluster.label=service-telemetry/default --cluster.peer=alertmanager-default-0.alertmanager-operated:9094 --cluster.reconnect-timeout=5m --web.config.file=/etc/alertmanager/web_config/web-config.yaml],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:mesh-tcp,HostPort:0,ContainerPort:9094,Protocol:TCP,HostIP:,},ContainerPort{Name:mesh-udp,HostPort:0,ContainerPort:9094,Protocol:UDP,HostIP:,},},Env:[]EnvVar{EnvVar{Name:POD_IP,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:status.podIP,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{memory: {{209715200 0} {} BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config-volume,ReadOnly:false,MountPath:/etc/alertmanager/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:config-out,ReadOnly:true,MountPath:/etc/alertmanager/config_out,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tls-assets,ReadOnly:true,MountPath:/etc/alertmanager/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:alertmanager-default-db,ReadOnly:false,MountPath:/alertmanager,SubPath:alertmanager-db,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-alertmanager-proxy-tls,ReadOnly:true,MountPath:/etc/alertmanager/secrets/default-alertmanager-proxy-tls,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:secret-default-session-secret,ReadOnly:true,MountPath:/etc/alertmanager/secrets/default-session-secret,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:web-config,ReadOnly:true,MountPath:/etc/alertmanager/web_config/web-config.yaml,SubPath:web-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:cluster-tls-config,ReadOnly:true,MountPath:/etc/alertmanager/cluster_tls_config/cluster-tls-config.yaml,SubPath:cluster-tls-config.yaml,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dtlxc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:Always,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000670000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:*false,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:FallbackToLogsOnError,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod alertmanager-default-0_service-telemetry(f490a619-9c48-49a0-857b-904084871923): ErrImagePull: rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled" logger="UnhandledError"
Jan 20 11:35:42 crc kubenswrapper[4725]: I0120 11:35:42.377348 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm"]
Jan 20 11:35:42 crc kubenswrapper[4725]: I0120 11:35:42.650489 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q"]
Jan 20 11:35:42 crc kubenswrapper[4725]: E0120 11:35:42.662467 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"prometheus\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/prometheus-default-0" podUID="7d31d6ca-dd83-489d-9956-abb0947df80d"
Jan 20 11:35:42 crc kubenswrapper[4725]: W0120 11:35:42.741736 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podf84a2726_80cb_4393_84ca_d901b4ee446c.slice/crio-cc23a70788c717bfa78180b2795518af3610c3990a260a35b8363f1a8063cbd4 WatchSource:0}: Error finding container cc23a70788c717bfa78180b2795518af3610c3990a260a35b8363f1a8063cbd4: Status 404 returned error can't find the container with id cc23a70788c717bfa78180b2795518af3610c3990a260a35b8363f1a8063cbd4
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.404440 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"06025181e7d1785f1eb470fbc77262ed1b338faab91737ca343db668e1da738f"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.406321 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"fd9ffc31121519069aab88569a92795991439a2dab1cfe307a62785a7775eed8"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.409710 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"60897b0526705a2bdce96de8120b2996c6f51009d27d30aefb09adbcc70ac9e2"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.417966 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"0ed63706cd2277df0141d3ae50126099eb108aac690c3d09a51e5e52583aeace"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.421039 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"8cfe9fcda51a6cb8fa2fa3b7829b9ab0376307df990d61d5332c1a2f4369185c"}
Jan 20 11:35:43 crc kubenswrapper[4725]: I0120 11:35:43.422108 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"cc23a70788c717bfa78180b2795518af3610c3990a260a35b8363f1a8063cbd4"}
Jan 20 11:35:44 crc kubenswrapper[4725]: I0120 11:35:44.434003 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.459670 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.470193 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.475644 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"7d31d6ca-dd83-489d-9956-abb0947df80d","Type":"ContainerStarted","Data":"08463e2706f1274310390e99a78738fc5eb6369194877cd2f67e4058ae8a432d"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.480044 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.485492 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165"}
Jan 20 11:35:45 crc kubenswrapper[4725]: I0120 11:35:45.527684 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.173438717 podStartE2EDuration="1m16.527650712s" podCreationTimestamp="2026-01-20 11:34:29 +0000 UTC" firstStartedPulling="2026-01-20 11:34:32.64854933 +0000 UTC m=+1800.856871303" lastFinishedPulling="2026-01-20 11:35:45.002761325 +0000 UTC m=+1873.211083298" observedRunningTime="2026-01-20 11:35:45.515789058 +0000 UTC m=+1873.724111021" watchObservedRunningTime="2026-01-20 11:35:45.527650712 +0000 UTC m=+1873.735972695"
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.398950 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0"
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.399578 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/prometheus-default-0"
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.480046 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0"
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.521939 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"54b53c28ca160cc4b6173817b4e4bfe5c780f0e152b146502bf9e2df7e4447d2"}
Jan 20 11:35:47 crc kubenswrapper[4725]: I0120 11:35:47.592330 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0"
Jan 20 11:35:51 crc kubenswrapper[4725]: E0120 11:35:51.615722 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ErrImagePull: \"rpc error: code = Canceled desc = copying system image from manifest list: copying config: context canceled\"" pod="service-telemetry/alertmanager-default-0" podUID="f490a619-9c48-49a0-857b-904084871923"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.164887 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"66329f3e90880289692694b45ebd6bf2e64cef907cc78afe5beb6936040098ab"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.172389 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"b14e74a778b54bb58919bce0d6c9488250e61d2b9e051f515581fc0d551630c6"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.191222 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"97d36175ed4533e74d822f61e733dcd1ca814923e08ac5ca15b90c1e2d54406f"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.195328 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"f578675e78ff6f4f683d9738a43c02568cab6e08b81345f7e1a8019fd6a79081"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.208625 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"89af6fb1acca926234ff8753cf7ae0cd7083606c275a08159c5bcfa057659025"}
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.213884 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"b54981290931d4392f20f30488dfb1fb7da473a0e2b8274fcedb37ebfd1d216a"}
Jan 20 11:35:52 crc kubenswrapper[4725]: E0120 11:35:52.215883 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"alertmanager\" with ImagePullBackOff: \"Back-off pulling image \\\"quay.io/prometheus/alertmanager:latest\\\"\"" pod="service-telemetry/alertmanager-default-0" podUID="f490a619-9c48-49a0-857b-904084871923"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.217719 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" podStartSLOduration=5.33374102 podStartE2EDuration="14.21767643s" podCreationTimestamp="2026-01-20 11:35:38 +0000 UTC" firstStartedPulling="2026-01-20 11:35:42.415395996 +0000 UTC m=+1870.623717959" lastFinishedPulling="2026-01-20 11:35:51.299331406 +0000 UTC m=+1879.507653369" observedRunningTime="2026-01-20 11:35:52.203549835 +0000 UTC m=+1880.411871808" watchObservedRunningTime="2026-01-20 11:35:52.21767643 +0000 UTC m=+1880.425998403"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.232372 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" podStartSLOduration=4.62839938 podStartE2EDuration="32.232346613s" podCreationTimestamp="2026-01-20 11:35:20 +0000 UTC" firstStartedPulling="2026-01-20 11:35:23.601990602 +0000 UTC m=+1851.810312575" lastFinishedPulling="2026-01-20 11:35:51.205937835 +0000 UTC m=+1879.414259808" observedRunningTime="2026-01-20 11:35:52.23067554 +0000 UTC m=+1880.438997513" watchObservedRunningTime="2026-01-20 11:35:52.232346613 +0000 UTC m=+1880.440668586"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.263510 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" podStartSLOduration=12.731688865 podStartE2EDuration="23.263480224s" podCreationTimestamp="2026-01-20 11:35:29 +0000 UTC" firstStartedPulling="2026-01-20 11:35:40.704234583 +0000 UTC m=+1868.912556556" lastFinishedPulling="2026-01-20 11:35:51.236025942 +0000 UTC m=+1879.444347915" observedRunningTime="2026-01-20 11:35:52.263255617 +0000 UTC m=+1880.471577590" watchObservedRunningTime="2026-01-20 11:35:52.263480224 +0000 UTC m=+1880.471802197"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.305763 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" podStartSLOduration=4.297600789 podStartE2EDuration="35.305735315s" podCreationTimestamp="2026-01-20 11:35:17 +0000 UTC" firstStartedPulling="2026-01-20 11:35:20.292215643 +0000 UTC m=+1848.500537616" lastFinishedPulling="2026-01-20 11:35:51.300350169 +0000 UTC m=+1879.508672142" observedRunningTime="2026-01-20 11:35:52.299758057 +0000 UTC m=+1880.508080050" watchObservedRunningTime="2026-01-20 11:35:52.305735315 +0000 UTC m=+1880.514057288"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.337350 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" podStartSLOduration=2.915387005 podStartE2EDuration="11.337324401s" podCreationTimestamp="2026-01-20 11:35:41 +0000 UTC" firstStartedPulling="2026-01-20 11:35:42.764196565 +0000 UTC m=+1870.972518528" lastFinishedPulling="2026-01-20 11:35:51.186133961 +0000 UTC m=+1879.394455924" observedRunningTime="2026-01-20 11:35:52.334804781 +0000 UTC m=+1880.543126754" watchObservedRunningTime="2026-01-20 11:35:52.337324401 +0000 UTC m=+1880.545646374"
Jan 20 11:35:52 crc kubenswrapper[4725]: I0120 11:35:52.937702 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f"
Jan 20 11:35:52 crc kubenswrapper[4725]: E0120 11:35:52.937963 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 11:35:56 crc kubenswrapper[4725]: I0120 11:35:56.599946 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"f490a619-9c48-49a0-857b-904084871923","Type":"ContainerStarted","Data":"e7a78ef86418e44d2998c4a3a047af9b9fad08bcc5d9709e4ac74eb55dfde9e1"}
Jan 20 11:35:56 crc kubenswrapper[4725]: I0120 11:35:56.642242 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=30.458544344 podStartE2EDuration="1m5.642214047s" podCreationTimestamp="2026-01-20 11:34:51 +0000 UTC" firstStartedPulling="2026-01-20 11:35:20.412972377 +0000 UTC m=+1848.621294350" lastFinishedPulling="2026-01-20 11:35:55.59664208 +0000 UTC m=+1883.804964053" observedRunningTime="2026-01-20 11:35:56.633840462 +0000 UTC m=+1884.842162455" watchObservedRunningTime="2026-01-20 11:35:56.642214047 +0000 UTC m=+1884.850536020"
Jan 20 11:35:59 crc kubenswrapper[4725]: I0120 11:35:59.983732 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"]
Jan 20 11:35:59 crc kubenswrapper[4725]: I0120 11:35:59.984501 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerName="default-interconnect" containerID="cri-o://c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033" gracePeriod=30
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.180358 4725 generic.go:334] "Generic (PLEG): container finished" podID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerID="c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033" exitCode=0
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.180440 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" event={"ID":"a7ed1b92-041f-4075-bbc5-89e61158d803","Type":"ContainerDied","Data":"c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033"}
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.186002 4725 generic.go:334] "Generic (PLEG): container finished" podID="10b6bc99-b2ce-4952-a481-bbabe3a3fc16" containerID="213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5" exitCode=0
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.186058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerDied","Data":"213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5"}
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.187041 4725 scope.go:117] "RemoveContainer" containerID="213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5"
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.486041 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh"
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566689 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566761 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566796 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566889 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.566992 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.567063 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.568205 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") pod \"a7ed1b92-041f-4075-bbc5-89e61158d803\" (UID: \"a7ed1b92-041f-4075-bbc5-89e61158d803\") "
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.568415 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.568821 4725 reconciler_common.go:293] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-config\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.575525 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.575562 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.579920 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66" (OuterVolumeSpecName: "kube-api-access-ndc66") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "kube-api-access-ndc66". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.587247 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "default-interconnect-inter-router-ca". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.598774 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.602938 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "a7ed1b92-041f-4075-bbc5-89e61158d803" (UID: "a7ed1b92-041f-4075-bbc5-89e61158d803"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGidValue ""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670395 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ndc66\" (UniqueName: \"kubernetes.io/projected/a7ed1b92-041f-4075-bbc5-89e61158d803-kube-api-access-ndc66\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670448 4725 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670468 4725 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670487 4725 reconciler_common.go:293] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-sasl-users\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670509 4725 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:00 crc kubenswrapper[4725]: I0120 11:36:00.670524 4725 reconciler_common.go:293] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/a7ed1b92-041f-4075-bbc5-89e61158d803-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\""
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.153848 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-mqfr7"]
Jan 20 11:36:01 crc kubenswrapper[4725]: E0120 11:36:01.154329 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerName="default-interconnect"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.154388 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerName="default-interconnect"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.154581 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" containerName="default-interconnect"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.155352 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.175697 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-mqfr7"]
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179443 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ntl9k\" (UniqueName: \"kubernetes.io/projected/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-kube-api-access-ntl9k\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179482 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179553 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-config\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179590 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179618 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-users\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179649 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.179679 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.200692 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh" event={"ID":"a7ed1b92-041f-4075-bbc5-89e61158d803","Type":"ContainerDied","Data":"fc8242d5514e690ee80b2bdcc2ff5977848ca545548efc96d47954b1674d6f08"}
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.200786 4725 scope.go:117] "RemoveContainer" containerID="c14bd4fa6ae8b5c49bbef4c942582542d141988e84936a51137bc3a20377b033"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.201346 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-pg5vh"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.205617 4725 generic.go:334] "Generic (PLEG): container finished" podID="14922311-0e93-4bf9-8980-72baefd93497" containerID="0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65" exitCode=0
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.205768 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerDied","Data":"0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65"}
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.207131 4725 scope.go:117] "RemoveContainer" containerID="0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.236575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24"}
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.240312 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"]
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.241883 4725 generic.go:334] "Generic (PLEG): container finished" podID="f84a2726-80cb-4393-84ca-d901b4ee446c" containerID="bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165" exitCode=0
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.242038 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerDied","Data":"bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165"}
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.251063 4725 generic.go:334] "Generic (PLEG): container finished" podID="739b7c2c-b11b-4260-a184-7dd184677dad" containerID="a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3" exitCode=0
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.251199 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerDied","Data":"a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3"}
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.251579 4725 scope.go:117] "RemoveContainer" containerID="bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.252137 4725 scope.go:117] "RemoveContainer" containerID="a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.254321 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-pg5vh"]
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.275875 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerDied","Data":"85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5"}
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.276930 4725 scope.go:117] "RemoveContainer" containerID="85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.277851 4725 generic.go:334] "Generic (PLEG): container finished" podID="6b74ea17-71c5-47e0-a15e-e963223f11f0" containerID="85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5" exitCode=0
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.309181 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.310750 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.310944 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ntl9k\" (UniqueName: \"kubernetes.io/projected/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-kube-api-access-ntl9k\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.310982 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.318502 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-config\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.319356 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.319424 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-users\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.319731 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-config\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.337949 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.338184 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-sasl-users\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.342440 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-credentials\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.343035 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-openstack-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.345302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ntl9k\" (UniqueName: \"kubernetes.io/projected/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-kube-api-access-ntl9k\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.352211 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/5b2eb85b-dd29-4dc6-9d02-1087e7119ae7-default-interconnect-inter-router-ca\") pod \"default-interconnect-68864d46cb-mqfr7\" (UID: \"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7\") " pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.474839 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7"
Jan 20 11:36:01 crc kubenswrapper[4725]: I0120 11:36:01.918388 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-68864d46cb-mqfr7"]
Jan 20 11:36:01 crc kubenswrapper[4725]: W0120 11:36:01.941027 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod5b2eb85b_dd29_4dc6_9d02_1087e7119ae7.slice/crio-27a5f7c7c99d496bcece77ecc64842963e8c162c5ad5a16b80041c4dc8cf2337 WatchSource:0}: Error finding container 27a5f7c7c99d496bcece77ecc64842963e8c162c5ad5a16b80041c4dc8cf2337: Status 404 returned error can't find the container with id 27a5f7c7c99d496bcece77ecc64842963e8c162c5ad5a16b80041c4dc8cf2337
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.291782 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512"}
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.298526 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd"}
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.305139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605"}
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.314411 4725 generic.go:334] "Generic (PLEG): container finished" podID="10b6bc99-b2ce-4952-a481-bbabe3a3fc16" containerID="f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24" exitCode=0
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.314499 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerDied","Data":"f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24"}
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.314811 4725 scope.go:117] "RemoveContainer" containerID="213111dfdf0a0ff0f0b45369175eace3f63e8724046dbbebe7bbe9f19ce599f5"
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.315782 4725 scope.go:117] "RemoveContainer" containerID="f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24"
Jan 20 11:36:02 crc kubenswrapper[4725]: E0120 11:36:02.316579 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_service-telemetry(10b6bc99-b2ce-4952-a481-bbabe3a3fc16)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" podUID="10b6bc99-b2ce-4952-a481-bbabe3a3fc16"
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.319063 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" event={"ID":"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7","Type":"ContainerStarted","Data":"ef50400501ae4fe7c570eb4d055f1e801792ee905dca11d7fff720f1b1cc625a"}
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.319161 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" event={"ID":"5b2eb85b-dd29-4dc6-9d02-1087e7119ae7","Type":"ContainerStarted","Data":"27a5f7c7c99d496bcece77ecc64842963e8c162c5ad5a16b80041c4dc8cf2337"}
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.331045 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7"}
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.519749 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-68864d46cb-mqfr7" podStartSLOduration=3.519715451 podStartE2EDuration="3.519715451s" podCreationTimestamp="2026-01-20 11:35:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:36:02.486153608 +0000 UTC m=+1890.694475601" watchObservedRunningTime="2026-01-20 11:36:02.519715451 +0000 UTC m=+1890.728037434"
Jan 20 11:36:02 crc kubenswrapper[4725]: I0120 11:36:02.999898 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7ed1b92-041f-4075-bbc5-89e61158d803" path="/var/lib/kubelet/pods/a7ed1b92-041f-4075-bbc5-89e61158d803/volumes"
Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.276405 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod739b7c2c_b11b_4260_a184_7dd184677dad.slice/crio-conmon-a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512.scope\": RecentStats: unable to find data in memory cache]"
Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.346084 4725 generic.go:334] "Generic (PLEG): container finished" podID="6b74ea17-71c5-47e0-a15e-e963223f11f0" containerID="933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd" exitCode=0
Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.346653 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerDied","Data":"933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd"}
Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.346704 4725 scope.go:117] "RemoveContainer" containerID="85b9d5f0aedf2fae91704fa29f2787e20e867b09945f48a97ca1a50eab2049b5"
Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.347632 4725 scope.go:117] "RemoveContainer" containerID="933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd"
Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.347971 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_service-telemetry(6b74ea17-71c5-47e0-a15e-e963223f11f0)\""
pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" podUID="6b74ea17-71c5-47e0-a15e-e963223f11f0" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.360200 4725 generic.go:334] "Generic (PLEG): container finished" podID="14922311-0e93-4bf9-8980-72baefd93497" containerID="a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605" exitCode=0 Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.360815 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerDied","Data":"a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605"} Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.362101 4725 scope.go:117] "RemoveContainer" containerID="a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605" Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.362499 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_service-telemetry(14922311-0e93-4bf9-8980-72baefd93497)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" podUID="14922311-0e93-4bf9-8980-72baefd93497" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.377239 4725 generic.go:334] "Generic (PLEG): container finished" podID="f84a2726-80cb-4393-84ca-d901b4ee446c" containerID="868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7" exitCode=0 Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.377310 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerDied","Data":"868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7"} Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.378356 4725 scope.go:117] "RemoveContainer" containerID="868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7" Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.398697 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_service-telemetry(f84a2726-80cb-4393-84ca-d901b4ee446c)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" podUID="f84a2726-80cb-4393-84ca-d901b4ee446c" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.399003 4725 generic.go:334] "Generic (PLEG): container finished" podID="739b7c2c-b11b-4260-a184-7dd184677dad" containerID="a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512" exitCode=0 Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.399172 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerDied","Data":"a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512"} Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.399968 4725 scope.go:117] "RemoveContainer" containerID="a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512" Jan 20 11:36:03 crc kubenswrapper[4725]: E0120 11:36:03.400370 4725 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-ff457bf89-458zm_service-telemetry(739b7c2c-b11b-4260-a184-7dd184677dad)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" podUID="739b7c2c-b11b-4260-a184-7dd184677dad" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.430370 4725 scope.go:117] "RemoveContainer" containerID="0a94f1e7f7b86ee205276d80219c86dcc78f7edb37dbdbc85923fe9ded4fda65" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.501740 4725 scope.go:117] "RemoveContainer" containerID="bb62f6c31d67b824a936c21441a5c5ca97da949df73f348a3cfc81003a086165" Jan 20 11:36:03 crc kubenswrapper[4725]: I0120 11:36:03.551871 4725 scope.go:117] "RemoveContainer" containerID="a814cc228323d0e0e96ee552884a26d9b2bed4b56fadbfdeef8c6d236729c1b3" Jan 20 11:36:04 crc kubenswrapper[4725]: I0120 11:36:04.932347 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:04 crc kubenswrapper[4725]: E0120 11:36:04.932592 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.891651 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.893325 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.895877 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"qdr-test-config" Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.896214 4725 reflector.go:368] Caches populated for *v1.Secret from object-"service-telemetry"/"default-interconnect-selfsigned" Jan 20 11:36:06 crc kubenswrapper[4725]: I0120 11:36:06.920453 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.066225 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/879163eb-1e0f-4030-aec9-69331c2e5ecd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.066307 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/879163eb-1e0f-4030-aec9-69331c2e5ecd-qdr-test-config\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.066680 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5q8b2\" (UniqueName: \"kubernetes.io/projected/879163eb-1e0f-4030-aec9-69331c2e5ecd-kube-api-access-5q8b2\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.168490 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/879163eb-1e0f-4030-aec9-69331c2e5ecd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.168575 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/879163eb-1e0f-4030-aec9-69331c2e5ecd-qdr-test-config\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.168655 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-5q8b2\" (UniqueName: \"kubernetes.io/projected/879163eb-1e0f-4030-aec9-69331c2e5ecd-kube-api-access-5q8b2\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.170633 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/879163eb-1e0f-4030-aec9-69331c2e5ecd-qdr-test-config\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.183829 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/879163eb-1e0f-4030-aec9-69331c2e5ecd-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: 
\"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.198916 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-5q8b2\" (UniqueName: \"kubernetes.io/projected/879163eb-1e0f-4030-aec9-69331c2e5ecd-kube-api-access-5q8b2\") pod \"qdr-test\" (UID: \"879163eb-1e0f-4030-aec9-69331c2e5ecd\") " pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.245533 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 20 11:36:07 crc kubenswrapper[4725]: I0120 11:36:07.535706 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 20 11:36:08 crc kubenswrapper[4725]: I0120 11:36:08.465961 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"879163eb-1e0f-4030-aec9-69331c2e5ecd","Type":"ContainerStarted","Data":"da993ebcd8b9228232ab084c858484929b255999d4f0715660ec5ee17652eb67"} Jan 20 11:36:12 crc kubenswrapper[4725]: I0120 11:36:12.996294 4725 scope.go:117] "RemoveContainer" containerID="f808b38c632a029f482d7ac80acbc52d0b763366efa16b63d7f6d3e8d5a6ff24" Jan 20 11:36:14 crc kubenswrapper[4725]: I0120 11:36:14.932102 4725 scope.go:117] "RemoveContainer" containerID="868a1959d9b970de06b85cc00450471de5aa0fb783297a7e675acc22855a85f7" Jan 20 11:36:14 crc kubenswrapper[4725]: I0120 11:36:14.932736 4725 scope.go:117] "RemoveContainer" containerID="a88b9434de7bd9bfbc95d3f2b49a82c5e17930f19185b443e13efe132a738512" Jan 20 11:36:16 crc kubenswrapper[4725]: I0120 11:36:16.933122 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:16 crc kubenswrapper[4725]: I0120 11:36:16.933220 4725 scope.go:117] "RemoveContainer" containerID="a468fde69b7db36725df7d207536a0b73e2e6b8fa64a9e1e72baf69ef7b64605" Jan 20 11:36:16 crc kubenswrapper[4725]: E0120 11:36:16.933529 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:36:17 crc kubenswrapper[4725]: I0120 11:36:17.932508 4725 scope.go:117] "RemoveContainer" containerID="933a5a21d1158e8f5c0e37d8c018a75abf7f52832dd8eee14047121d6d55c0cd" Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.547557 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p" event={"ID":"6b74ea17-71c5-47e0-a15e-e963223f11f0","Type":"ContainerStarted","Data":"4c228724f4e58a02ba325639e1f96f37ea92c44426e3e68588ae4f2d2f4ac377"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.551584 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7" event={"ID":"14922311-0e93-4bf9-8980-72baefd93497","Type":"ContainerStarted","Data":"7ec44d266962c93700d463ae8888809f4e095d672e0f139d7b68c8bf45fb1aa5"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.597315 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g" 
event={"ID":"10b6bc99-b2ce-4952-a481-bbabe3a3fc16","Type":"ContainerStarted","Data":"401c2e3ccec1b76d610118cfaaaf7b350b782353b93f785559a1c4b50a8c6ae6"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.605813 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q" event={"ID":"f84a2726-80cb-4393-84ca-d901b4ee446c","Type":"ContainerStarted","Data":"0f71d171fc42eedd96b197809e210772b9866ec69c7bf42b7ddc238b4cc06796"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.622756 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"879163eb-1e0f-4030-aec9-69331c2e5ecd","Type":"ContainerStarted","Data":"549e75fbed0185c0c494da95bccd3cd34e90cffcf46c0dc0491ab85ac7ed11cc"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.640766 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-ff457bf89-458zm" event={"ID":"739b7c2c-b11b-4260-a184-7dd184677dad","Type":"ContainerStarted","Data":"9118a6dea9c5c8d5d0335ad2409ad563a8078a6b0c6a8d4a32446b247a75d423"} Jan 20 11:36:20 crc kubenswrapper[4725]: I0120 11:36:20.744394 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.290329099 podStartE2EDuration="14.74435841s" podCreationTimestamp="2026-01-20 11:36:06 +0000 UTC" firstStartedPulling="2026-01-20 11:36:07.545727692 +0000 UTC m=+1895.754049665" lastFinishedPulling="2026-01-20 11:36:19.999757003 +0000 UTC m=+1908.208078976" observedRunningTime="2026-01-20 11:36:20.741652414 +0000 UTC m=+1908.949974407" watchObservedRunningTime="2026-01-20 11:36:20.74435841 +0000 UTC m=+1908.952680383" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.092854 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-phjxw"] Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.094557 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.097917 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.097998 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.098901 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.098910 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.099249 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.099503 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.173291 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-phjxw"] Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208253 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208285 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208322 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208344 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 
11:36:21.208376 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.208408 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.310941 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311036 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311187 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311228 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311264 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311314 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.311346 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") pod 
\"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.312387 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.312486 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.312515 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.313166 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.313448 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.313867 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.340717 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") pod \"stf-smoketest-smoke1-phjxw\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.417858 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.503158 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.508450 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.516111 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.617228 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") pod \"curl\" (UID: \"650f5183-3a46-4da1-befe-a96b43c85a6e\") " pod="service-telemetry/curl" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.719426 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") pod \"curl\" (UID: \"650f5183-3a46-4da1-befe-a96b43c85a6e\") " pod="service-telemetry/curl" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.751023 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") pod \"curl\" (UID: \"650f5183-3a46-4da1-befe-a96b43c85a6e\") " pod="service-telemetry/curl" Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.805245 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-phjxw"] Jan 20 11:36:21 crc kubenswrapper[4725]: W0120 11:36:21.817622 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod3e274138_1522_41f2_8021_9f425af23d2e.slice/crio-e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be WatchSource:0}: Error finding container e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be: Status 404 returned error can't find the container with id e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be Jan 20 11:36:21 crc kubenswrapper[4725]: I0120 11:36:21.855971 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 20 11:36:22 crc kubenswrapper[4725]: I0120 11:36:22.368035 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 20 11:36:22 crc kubenswrapper[4725]: I0120 11:36:22.675613 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerStarted","Data":"e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be"} Jan 20 11:36:22 crc kubenswrapper[4725]: I0120 11:36:22.681149 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"650f5183-3a46-4da1-befe-a96b43c85a6e","Type":"ContainerStarted","Data":"a490f5b22bfba6cd0b89d78a402f51a5e98d798b3da34bb7f8ae944b4ab7f5f4"} Jan 20 11:36:31 crc kubenswrapper[4725]: I0120 11:36:27.933224 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:31 crc kubenswrapper[4725]: E0120 11:36:27.934422 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:36:34 crc kubenswrapper[4725]: I0120 11:36:34.955303 4725 generic.go:334] "Generic (PLEG): container finished" podID="650f5183-3a46-4da1-befe-a96b43c85a6e" containerID="86066600570f530a32f6940cdab38a7b29b48d19dbe081cc9e4d1ce34109f5bc" exitCode=0 Jan 20 11:36:34 crc kubenswrapper[4725]: I0120 11:36:34.956293 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"650f5183-3a46-4da1-befe-a96b43c85a6e","Type":"ContainerDied","Data":"86066600570f530a32f6940cdab38a7b29b48d19dbe081cc9e4d1ce34109f5bc"} Jan 20 11:36:41 crc kubenswrapper[4725]: I0120 11:36:41.938895 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:41 crc kubenswrapper[4725]: E0120 11:36:41.939971 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.689229 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.777960 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") pod \"650f5183-3a46-4da1-befe-a96b43c85a6e\" (UID: \"650f5183-3a46-4da1-befe-a96b43c85a6e\") " Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.784289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh" (OuterVolumeSpecName: "kube-api-access-bt5nh") pod "650f5183-3a46-4da1-befe-a96b43c85a6e" (UID: "650f5183-3a46-4da1-befe-a96b43c85a6e"). InnerVolumeSpecName "kube-api-access-bt5nh". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.848111 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_650f5183-3a46-4da1-befe-a96b43c85a6e/curl/0.log" Jan 20 11:36:43 crc kubenswrapper[4725]: I0120 11:36:43.880804 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-bt5nh\" (UniqueName: \"kubernetes.io/projected/650f5183-3a46-4da1-befe-a96b43c85a6e-kube-api-access-bt5nh\") on node \"crc\" DevicePath \"\"" Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.055166 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerStarted","Data":"2bea22707d430289bbc9f0f0b5bc7ce6dc6208ffc712275eba298cf4827f844c"} Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.057601 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"650f5183-3a46-4da1-befe-a96b43c85a6e","Type":"ContainerDied","Data":"a490f5b22bfba6cd0b89d78a402f51a5e98d798b3da34bb7f8ae944b4ab7f5f4"} Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.057642 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a490f5b22bfba6cd0b89d78a402f51a5e98d798b3da34bb7f8ae944b4ab7f5f4" Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.057681 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 20 11:36:44 crc kubenswrapper[4725]: I0120 11:36:44.091542 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-fxcvg_c22fff0f-fa8e-40e0-a8dc-a138398b06e7/prometheus-webhook-snmp/0.log" Jan 20 11:36:52 crc kubenswrapper[4725]: I0120 11:36:52.379699 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerStarted","Data":"b6e3347cd1127e0cb9014bb89ae882927f09f07ba800282ebb6c076670a28aa0"} Jan 20 11:36:52 crc kubenswrapper[4725]: I0120 11:36:52.405851 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-phjxw" podStartSLOduration=1.428668712 podStartE2EDuration="31.40583621s" podCreationTimestamp="2026-01-20 11:36:21 +0000 UTC" firstStartedPulling="2026-01-20 11:36:21.822524179 +0000 UTC m=+1910.030846162" lastFinishedPulling="2026-01-20 11:36:51.799691687 +0000 UTC m=+1940.008013660" observedRunningTime="2026-01-20 11:36:52.403132484 +0000 UTC m=+1940.611454457" watchObservedRunningTime="2026-01-20 11:36:52.40583621 +0000 UTC m=+1940.614158183" Jan 20 11:36:53 crc kubenswrapper[4725]: I0120 11:36:53.933128 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:36:53 crc kubenswrapper[4725]: E0120 11:36:53.933717 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:37:08 crc kubenswrapper[4725]: I0120 11:37:08.072006 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:37:08 crc kubenswrapper[4725]: E0120 11:37:08.075004 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:37:14 crc kubenswrapper[4725]: I0120 11:37:14.221933 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-fxcvg_c22fff0f-fa8e-40e0-a8dc-a138398b06e7/prometheus-webhook-snmp/0.log" Jan 20 11:37:16 crc kubenswrapper[4725]: I0120 11:37:16.632889 4725 scope.go:117] "RemoveContainer" containerID="cbb40b4a35af16ef739d7936989eb2a98cbe2e9f78178e91db6ddf8b1dfef24b" Jan 20 11:37:16 crc kubenswrapper[4725]: I0120 11:37:16.669199 4725 scope.go:117] "RemoveContainer" containerID="697a37843b8a0440d43c4e8976463aac27a527f1025878803dd957ce26ac737d" Jan 20 11:37:16 crc kubenswrapper[4725]: I0120 11:37:16.706853 4725 scope.go:117] "RemoveContainer" containerID="322aa27a42fe64732b61397f3af12e6913daf6723474abf3c9c0bde2daa65c96" Jan 20 11:37:16 crc kubenswrapper[4725]: I0120 11:37:16.738595 4725 scope.go:117] "RemoveContainer" 
containerID="78f02562103ddffde1093928ec6242b4c8b49a6f4ce128c626fad826fff2e675" Jan 20 11:37:18 crc kubenswrapper[4725]: I0120 11:37:18.599387 4725 generic.go:334] "Generic (PLEG): container finished" podID="3e274138-1522-41f2-8021-9f425af23d2e" containerID="2bea22707d430289bbc9f0f0b5bc7ce6dc6208ffc712275eba298cf4827f844c" exitCode=1 Jan 20 11:37:18 crc kubenswrapper[4725]: I0120 11:37:18.599462 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerDied","Data":"2bea22707d430289bbc9f0f0b5bc7ce6dc6208ffc712275eba298cf4827f844c"} Jan 20 11:37:18 crc kubenswrapper[4725]: I0120 11:37:18.600625 4725 scope.go:117] "RemoveContainer" containerID="2bea22707d430289bbc9f0f0b5bc7ce6dc6208ffc712275eba298cf4827f844c" Jan 20 11:37:20 crc kubenswrapper[4725]: I0120 11:37:20.932613 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:37:20 crc kubenswrapper[4725]: E0120 11:37:20.932931 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:37:24 crc kubenswrapper[4725]: I0120 11:37:24.653776 4725 generic.go:334] "Generic (PLEG): container finished" podID="3e274138-1522-41f2-8021-9f425af23d2e" containerID="b6e3347cd1127e0cb9014bb89ae882927f09f07ba800282ebb6c076670a28aa0" exitCode=1 Jan 20 11:37:24 crc kubenswrapper[4725]: I0120 11:37:24.653884 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerDied","Data":"b6e3347cd1127e0cb9014bb89ae882927f09f07ba800282ebb6c076670a28aa0"} Jan 20 11:37:25 crc kubenswrapper[4725]: I0120 11:37:25.922278 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.101959 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102225 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102349 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102578 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102617 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102695 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.102789 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") pod \"3e274138-1522-41f2-8021-9f425af23d2e\" (UID: \"3e274138-1522-41f2-8021-9f425af23d2e\") " Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.110350 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6" (OuterVolumeSpecName: "kube-api-access-btsq6") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "kube-api-access-btsq6". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.122944 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.123007 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.124878 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.125755 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.126799 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.128376 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "3e274138-1522-41f2-8021-9f425af23d2e" (UID: "3e274138-1522-41f2-8021-9f425af23d2e"). InnerVolumeSpecName "ceilometer-publisher". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205348 4725 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205392 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205690 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205708 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205721 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205836 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-btsq6\" (UniqueName: \"kubernetes.io/projected/3e274138-1522-41f2-8021-9f425af23d2e-kube-api-access-btsq6\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.205847 4725 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/3e274138-1522-41f2-8021-9f425af23d2e-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.676415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-phjxw" event={"ID":"3e274138-1522-41f2-8021-9f425af23d2e","Type":"ContainerDied","Data":"e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be"} Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.676476 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e1ce5366d197da61ce1a384858992c4ffc1d4a3f35444e39eaa00f356d1406be" Jan 20 11:37:26 crc kubenswrapper[4725]: I0120 11:37:26.676993 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-phjxw" Jan 20 11:37:31 crc kubenswrapper[4725]: I0120 11:37:31.933729 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:37:32 crc kubenswrapper[4725]: I0120 11:37:32.748552 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da"} Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.030259 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-n5jwb"] Jan 20 11:37:33 crc kubenswrapper[4725]: E0120 11:37:33.031095 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="650f5183-3a46-4da1-befe-a96b43c85a6e" containerName="curl" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031115 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="650f5183-3a46-4da1-befe-a96b43c85a6e" containerName="curl" Jan 20 11:37:33 crc kubenswrapper[4725]: E0120 11:37:33.031137 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-collectd" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031147 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-collectd" Jan 20 11:37:33 crc kubenswrapper[4725]: E0120 11:37:33.031175 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-ceilometer" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031189 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-ceilometer" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031355 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="650f5183-3a46-4da1-befe-a96b43c85a6e" containerName="curl" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031382 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-collectd" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.031401 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3e274138-1522-41f2-8021-9f425af23d2e" containerName="smoketest-ceilometer" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.032422 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.040504 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.041050 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.041610 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.041821 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.063857 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.064968 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.078810 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-n5jwb"] Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.132861 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.132949 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.132988 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.133062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.133125 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc 
kubenswrapper[4725]: I0120 11:37:33.133267 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.133528 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.234955 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235109 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235158 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235203 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235237 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235282 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.235325 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") pod 
\"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.236443 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.237198 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.237832 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.238654 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.239236 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.239589 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.259482 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") pod \"stf-smoketest-smoke1-n5jwb\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.364410 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.637401 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-n5jwb"] Jan 20 11:37:33 crc kubenswrapper[4725]: W0120 11:37:33.641002 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod98772f19_fcd3_4ee3_91e7_aa87154c3c50.slice/crio-6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4 WatchSource:0}: Error finding container 6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4: Status 404 returned error can't find the container with id 6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4 Jan 20 11:37:33 crc kubenswrapper[4725]: I0120 11:37:33.758731 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerStarted","Data":"6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4"} Jan 20 11:37:34 crc kubenswrapper[4725]: I0120 11:37:34.771244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerStarted","Data":"4202eace9a0dac1a2bf22b64ca1974be4b477cc16863898bcfcdbe1b277657c2"} Jan 20 11:37:34 crc kubenswrapper[4725]: I0120 11:37:34.771727 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerStarted","Data":"4cf47d02e83874585e6aa2dc72086299ea42bb5eaa1e6208d969a381f36e3229"} Jan 20 11:37:34 crc kubenswrapper[4725]: I0120 11:37:34.794041 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" podStartSLOduration=1.7939993570000001 podStartE2EDuration="1.793999357s" podCreationTimestamp="2026-01-20 11:37:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:37:34.79280609 +0000 UTC m=+1983.001128053" watchObservedRunningTime="2026-01-20 11:37:34.793999357 +0000 UTC m=+1983.002321330" Jan 20 11:38:07 crc kubenswrapper[4725]: I0120 11:38:07.068647 4725 generic.go:334] "Generic (PLEG): container finished" podID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerID="4202eace9a0dac1a2bf22b64ca1974be4b477cc16863898bcfcdbe1b277657c2" exitCode=1 Jan 20 11:38:07 crc kubenswrapper[4725]: I0120 11:38:07.068746 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerDied","Data":"4202eace9a0dac1a2bf22b64ca1974be4b477cc16863898bcfcdbe1b277657c2"} Jan 20 11:38:07 crc kubenswrapper[4725]: I0120 11:38:07.070621 4725 scope.go:117] "RemoveContainer" containerID="4202eace9a0dac1a2bf22b64ca1974be4b477cc16863898bcfcdbe1b277657c2" Jan 20 11:38:08 crc kubenswrapper[4725]: I0120 11:38:08.080587 4725 generic.go:334] "Generic (PLEG): container finished" podID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerID="4cf47d02e83874585e6aa2dc72086299ea42bb5eaa1e6208d969a381f36e3229" exitCode=1 Jan 20 11:38:08 crc kubenswrapper[4725]: I0120 11:38:08.080675 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" 
event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerDied","Data":"4cf47d02e83874585e6aa2dc72086299ea42bb5eaa1e6208d969a381f36e3229"} Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.346804 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.429587 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430196 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430283 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430351 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430429 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430513 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.430556 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") pod \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\" (UID: \"98772f19-fcd3-4ee3-91e7-aa87154c3c50\") " Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.437826 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w" (OuterVolumeSpecName: "kube-api-access-mz88w") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "kube-api-access-mz88w". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.451906 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.452035 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.451969 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.452643 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.453299 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.455389 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "98772f19-fcd3-4ee3-91e7-aa87154c3c50" (UID: "98772f19-fcd3-4ee3-91e7-aa87154c3c50"). InnerVolumeSpecName "sensubility-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532323 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532372 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532384 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-mz88w\" (UniqueName: \"kubernetes.io/projected/98772f19-fcd3-4ee3-91e7-aa87154c3c50-kube-api-access-mz88w\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532393 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532404 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532414 4725 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:09 crc kubenswrapper[4725]: I0120 11:38:09.532422 4725 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/98772f19-fcd3-4ee3-91e7-aa87154c3c50-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:10 crc kubenswrapper[4725]: I0120 11:38:10.101608 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" event={"ID":"98772f19-fcd3-4ee3-91e7-aa87154c3c50","Type":"ContainerDied","Data":"6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4"} Jan 20 11:38:10 crc kubenswrapper[4725]: I0120 11:38:10.101682 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-n5jwb" Jan 20 11:38:10 crc kubenswrapper[4725]: I0120 11:38:10.101688 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6a4ad64525a1e3d081a93d4a2fa5d9709ba3a5d05fcbdda1ae9ff9b3298dd5c4" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.035024 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-z2qv6"] Jan 20 11:38:27 crc kubenswrapper[4725]: E0120 11:38:27.036695 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-collectd" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.036726 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-collectd" Jan 20 11:38:27 crc kubenswrapper[4725]: E0120 11:38:27.036740 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-ceilometer" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.036746 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-ceilometer" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.036907 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-ceilometer" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.036926 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="98772f19-fcd3-4ee3-91e7-aa87154c3c50" containerName="smoketest-collectd" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.037838 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.041193 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.041566 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.042660 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.043075 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.043290 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.043552 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.055378 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-z2qv6"] Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091532 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091598 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091629 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091659 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091709 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc 
kubenswrapper[4725]: I0120 11:38:27.091821 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h62mb\" (UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.091927 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193224 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193325 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193363 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193393 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193436 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193469 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-h62mb\" (UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.193522 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") pod 
\"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.194961 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.195024 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.195967 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.196285 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.196404 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.197023 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.221713 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-h62mb\" (UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") pod \"stf-smoketest-smoke1-z2qv6\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.382313 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.629709 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-z2qv6"] Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.760677 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.764469 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.773324 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.805961 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.806070 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.806173 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908092 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908205 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908305 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908805 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 
11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.908931 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:27 crc kubenswrapper[4725]: I0120 11:38:27.930145 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") pod \"redhat-operators-qgltk\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.102988 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.290005 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerStarted","Data":"cea8f28970afd85bcf9b5b2a1925c6b3a3bfaa0434a211aa929c29e6b55f4044"} Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.290520 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerStarted","Data":"75e63a488e52c52d2fda1015dcfb672de76425e3ba1b55bff85847b4bc5fcc5e"} Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.290535 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerStarted","Data":"9c0050128f3c4711577642b7280a796228af17f2d79b5330b1bbbed61094b001"} Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.329841 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" podStartSLOduration=1.329799927 podStartE2EDuration="1.329799927s" podCreationTimestamp="2026-01-20 11:38:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:38:28.322132773 +0000 UTC m=+2036.530454756" watchObservedRunningTime="2026-01-20 11:38:28.329799927 +0000 UTC m=+2036.538121900" Jan 20 11:38:28 crc kubenswrapper[4725]: I0120 11:38:28.413591 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:28 crc kubenswrapper[4725]: W0120 11:38:28.419274 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podf6692404_540c_447d_9548_777d22a10598.slice/crio-06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056 WatchSource:0}: Error finding container 06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056: Status 404 returned error can't find the container with id 06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056 Jan 20 11:38:29 crc kubenswrapper[4725]: I0120 11:38:29.301378 4725 generic.go:334] "Generic (PLEG): container finished" podID="f6692404-540c-447d-9548-777d22a10598" containerID="5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff" exitCode=0 Jan 20 11:38:29 crc kubenswrapper[4725]: I0120 11:38:29.301487 4725 kubelet.go:2453] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerDied","Data":"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff"} Jan 20 11:38:29 crc kubenswrapper[4725]: I0120 11:38:29.301573 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerStarted","Data":"06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056"} Jan 20 11:38:29 crc kubenswrapper[4725]: I0120 11:38:29.307057 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:38:31 crc kubenswrapper[4725]: I0120 11:38:31.320747 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerStarted","Data":"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588"} Jan 20 11:38:36 crc kubenswrapper[4725]: I0120 11:38:36.323298 4725 generic.go:334] "Generic (PLEG): container finished" podID="f6692404-540c-447d-9548-777d22a10598" containerID="da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588" exitCode=0 Jan 20 11:38:36 crc kubenswrapper[4725]: I0120 11:38:36.323410 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerDied","Data":"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588"} Jan 20 11:38:37 crc kubenswrapper[4725]: I0120 11:38:37.335968 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerStarted","Data":"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7"} Jan 20 11:38:37 crc kubenswrapper[4725]: I0120 11:38:37.367110 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-qgltk" podStartSLOduration=2.878675688 podStartE2EDuration="10.367059968s" podCreationTimestamp="2026-01-20 11:38:27 +0000 UTC" firstStartedPulling="2026-01-20 11:38:29.306681978 +0000 UTC m=+2037.515003951" lastFinishedPulling="2026-01-20 11:38:36.795066258 +0000 UTC m=+2045.003388231" observedRunningTime="2026-01-20 11:38:37.358302112 +0000 UTC m=+2045.566624095" watchObservedRunningTime="2026-01-20 11:38:37.367059968 +0000 UTC m=+2045.575381961" Jan 20 11:38:38 crc kubenswrapper[4725]: I0120 11:38:38.103778 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:38 crc kubenswrapper[4725]: I0120 11:38:38.103977 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:39 crc kubenswrapper[4725]: I0120 11:38:39.154558 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-qgltk" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" probeResult="failure" output=< Jan 20 11:38:39 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:38:39 crc kubenswrapper[4725]: > Jan 20 11:38:48 crc kubenswrapper[4725]: I0120 11:38:48.149621 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 
11:38:48 crc kubenswrapper[4725]: I0120 11:38:48.195000 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:48 crc kubenswrapper[4725]: I0120 11:38:48.386777 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.431914 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-qgltk" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" containerID="cri-o://068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" gracePeriod=2 Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.848111 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.961125 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") pod \"f6692404-540c-447d-9548-777d22a10598\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.961194 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") pod \"f6692404-540c-447d-9548-777d22a10598\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.961241 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") pod \"f6692404-540c-447d-9548-777d22a10598\" (UID: \"f6692404-540c-447d-9548-777d22a10598\") " Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.962765 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities" (OuterVolumeSpecName: "utilities") pod "f6692404-540c-447d-9548-777d22a10598" (UID: "f6692404-540c-447d-9548-777d22a10598"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:38:49 crc kubenswrapper[4725]: I0120 11:38:49.968825 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl" (OuterVolumeSpecName: "kube-api-access-j5tnl") pod "f6692404-540c-447d-9548-777d22a10598" (UID: "f6692404-540c-447d-9548-777d22a10598"). InnerVolumeSpecName "kube-api-access-j5tnl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.063914 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-j5tnl\" (UniqueName: \"kubernetes.io/projected/f6692404-540c-447d-9548-777d22a10598-kube-api-access-j5tnl\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.064317 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.101410 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "f6692404-540c-447d-9548-777d22a10598" (UID: "f6692404-540c-447d-9548-777d22a10598"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.166204 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/f6692404-540c-447d-9548-777d22a10598-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.445296 4725 generic.go:334] "Generic (PLEG): container finished" podID="f6692404-540c-447d-9548-777d22a10598" containerID="068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" exitCode=0 Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.445366 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerDied","Data":"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7"} Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.445432 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-qgltk" event={"ID":"f6692404-540c-447d-9548-777d22a10598","Type":"ContainerDied","Data":"06ba4622145a6872dc1d1c19ada3feb43545a28e29b668ed3371e2d024154056"} Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.445465 4725 scope.go:117] "RemoveContainer" containerID="068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.446693 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-qgltk" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.473740 4725 scope.go:117] "RemoveContainer" containerID="da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.489524 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.497596 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-qgltk"] Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.514570 4725 scope.go:117] "RemoveContainer" containerID="5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.543105 4725 scope.go:117] "RemoveContainer" containerID="068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" Jan 20 11:38:50 crc kubenswrapper[4725]: E0120 11:38:50.544019 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7\": container with ID starting with 068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7 not found: ID does not exist" containerID="068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544097 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7"} err="failed to get container status \"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7\": rpc error: code = NotFound desc = could not find container \"068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7\": container with ID starting with 068dedac7ac977cd6c48697e36fe58c8d3d11637d6b5a469b9f06c3b389a76f7 not found: ID does not exist" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544146 4725 scope.go:117] "RemoveContainer" containerID="da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588" Jan 20 11:38:50 crc kubenswrapper[4725]: E0120 11:38:50.544533 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588\": container with ID starting with da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588 not found: ID does not exist" containerID="da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544559 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588"} err="failed to get container status \"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588\": rpc error: code = NotFound desc = could not find container \"da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588\": container with ID starting with da175f450c8fc96efce509190386e65b54ee12a75eaf1e0b4ea51358e3788588 not found: ID does not exist" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544578 4725 scope.go:117] "RemoveContainer" containerID="5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff" Jan 20 11:38:50 crc kubenswrapper[4725]: E0120 11:38:50.544889 4725 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff\": container with ID starting with 5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff not found: ID does not exist" containerID="5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.544918 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff"} err="failed to get container status \"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff\": rpc error: code = NotFound desc = could not find container \"5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff\": container with ID starting with 5daa7fc6f97cfce0ee4c32d2847c14fe84e7e398d899b66b4196a9bd3506efff not found: ID does not exist" Jan 20 11:38:50 crc kubenswrapper[4725]: I0120 11:38:50.942772 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f6692404-540c-447d-9548-777d22a10598" path="/var/lib/kubelet/pods/f6692404-540c-447d-9548-777d22a10598/volumes" Jan 20 11:39:01 crc kubenswrapper[4725]: I0120 11:39:01.564253 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerID="cea8f28970afd85bcf9b5b2a1925c6b3a3bfaa0434a211aa929c29e6b55f4044" exitCode=1 Jan 20 11:39:01 crc kubenswrapper[4725]: I0120 11:39:01.565220 4725 generic.go:334] "Generic (PLEG): container finished" podID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerID="75e63a488e52c52d2fda1015dcfb672de76425e3ba1b55bff85847b4bc5fcc5e" exitCode=1 Jan 20 11:39:01 crc kubenswrapper[4725]: I0120 11:39:01.564336 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerDied","Data":"cea8f28970afd85bcf9b5b2a1925c6b3a3bfaa0434a211aa929c29e6b55f4044"} Jan 20 11:39:01 crc kubenswrapper[4725]: I0120 11:39:01.565286 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerDied","Data":"75e63a488e52c52d2fda1015dcfb672de76425e3ba1b55bff85847b4bc5fcc5e"} Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.830819 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.867299 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.867362 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.867435 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.868516 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-h62mb\" (UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.868574 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.868625 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.868647 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") pod \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\" (UID: \"6c81226d-b3a8-4f68-8c87-b32fe8ae7901\") " Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.875186 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb" (OuterVolumeSpecName: "kube-api-access-h62mb") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "kube-api-access-h62mb". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.891011 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "healthcheck-log". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.891289 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.892599 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.892844 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.895751 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.896689 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "6c81226d-b3a8-4f68-8c87-b32fe8ae7901" (UID: "6c81226d-b3a8-4f68-8c87-b32fe8ae7901"). InnerVolumeSpecName "collectd-config". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970625 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970677 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970694 4725 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970707 4725 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970719 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970737 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-h62mb\" (UniqueName: \"kubernetes.io/projected/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-kube-api-access-h62mb\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:02 crc kubenswrapper[4725]: I0120 11:39:02.970750 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6c81226d-b3a8-4f68-8c87-b32fe8ae7901-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:39:03 crc kubenswrapper[4725]: I0120 11:39:03.582215 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" event={"ID":"6c81226d-b3a8-4f68-8c87-b32fe8ae7901","Type":"ContainerDied","Data":"9c0050128f3c4711577642b7280a796228af17f2d79b5330b1bbbed61094b001"} Jan 20 11:39:03 crc kubenswrapper[4725]: I0120 11:39:03.582262 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-z2qv6" Jan 20 11:39:03 crc kubenswrapper[4725]: I0120 11:39:03.582285 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="9c0050128f3c4711577642b7280a796228af17f2d79b5330b1bbbed61094b001" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.036728 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7l92d"] Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038004 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-ceilometer" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038022 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-ceilometer" Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038034 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="extract-content" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038041 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="extract-content" Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038054 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-collectd" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038061 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-collectd" Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038072 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="extract-utilities" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038824 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="extract-utilities" Jan 20 11:39:41 crc kubenswrapper[4725]: E0120 11:39:41.038839 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.038846 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.039013 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="f6692404-540c-447d-9548-777d22a10598" containerName="registry-server" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.039032 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-collectd" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.039042 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6c81226d-b3a8-4f68-8c87-b32fe8ae7901" containerName="smoketest-ceilometer" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.039997 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.045896 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-entrypoint-script" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.045978 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-sensubility-config" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.046068 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-entrypoint-script" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.045915 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-healthcheck-log" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.045915 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-ceilometer-publisher" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.046287 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"service-telemetry"/"stf-smoketest-collectd-config" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.056511 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7l92d"] Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101340 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101472 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101522 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101543 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101610 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 
11:39:41.101655 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.101707 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.202885 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.202945 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.202980 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.203001 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.203029 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.203056 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.203138 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") pod 
\"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.204680 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.205090 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.205134 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.205398 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.205546 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.206272 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.234056 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") pod \"stf-smoketest-smoke1-7l92d\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.363441 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.640966 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-7l92d"] Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.939222 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerStarted","Data":"d88ad7a804a14de5ca4d9912edcde828fc8a64fa321a09e08e421555b29df5a4"} Jan 20 11:39:41 crc kubenswrapper[4725]: I0120 11:39:41.939788 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerStarted","Data":"3de7bd20454f36c3eb0175eedae39688ef0d7bed105bb1f393b399fcfe3733ca"} Jan 20 11:39:42 crc kubenswrapper[4725]: I0120 11:39:42.949955 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerStarted","Data":"eb4c08870c8528b33be6d24bdbf794786b9e04c4abca7370b6a44ed218e39cd9"} Jan 20 11:39:42 crc kubenswrapper[4725]: I0120 11:39:42.994548 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-7l92d" podStartSLOduration=1.994512262 podStartE2EDuration="1.994512262s" podCreationTimestamp="2026-01-20 11:39:41 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:39:42.986066156 +0000 UTC m=+2111.194388139" watchObservedRunningTime="2026-01-20 11:39:42.994512262 +0000 UTC m=+2111.202834245" Jan 20 11:39:56 crc kubenswrapper[4725]: I0120 11:39:56.728299 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:39:56 crc kubenswrapper[4725]: I0120 11:39:56.729222 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:40:15 crc kubenswrapper[4725]: I0120 11:40:15.283122 4725 generic.go:334] "Generic (PLEG): container finished" podID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerID="eb4c08870c8528b33be6d24bdbf794786b9e04c4abca7370b6a44ed218e39cd9" exitCode=0 Jan 20 11:40:15 crc kubenswrapper[4725]: I0120 11:40:15.283913 4725 generic.go:334] "Generic (PLEG): container finished" podID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerID="d88ad7a804a14de5ca4d9912edcde828fc8a64fa321a09e08e421555b29df5a4" exitCode=0 Jan 20 11:40:15 crc kubenswrapper[4725]: I0120 11:40:15.283196 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerDied","Data":"eb4c08870c8528b33be6d24bdbf794786b9e04c4abca7370b6a44ed218e39cd9"} Jan 20 11:40:15 crc kubenswrapper[4725]: I0120 11:40:15.283967 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" 
event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerDied","Data":"d88ad7a804a14de5ca4d9912edcde828fc8a64fa321a09e08e421555b29df5a4"} Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.651016 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695029 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695233 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695319 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695377 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695427 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.695463 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.696723 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") pod \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\" (UID: \"6716d3a1-d97b-4a7f-9a35-7f304cc226ad\") " Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.703146 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm" (OuterVolumeSpecName: "kube-api-access-dgnhm") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "kube-api-access-dgnhm". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.717043 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.718010 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.718529 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.719662 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.720038 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.721977 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "6716d3a1-d97b-4a7f-9a35-7f304cc226ad" (UID: "6716d3a1-d97b-4a7f-9a35-7f304cc226ad"). InnerVolumeSpecName "collectd-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798881 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798928 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798940 4725 reconciler_common.go:293] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798953 4725 reconciler_common.go:293] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798966 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-dgnhm\" (UniqueName: \"kubernetes.io/projected/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-kube-api-access-dgnhm\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798975 4725 reconciler_common.go:293] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:16 crc kubenswrapper[4725]: I0120 11:40:16.798982 4725 reconciler_common.go:293] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/6716d3a1-d97b-4a7f-9a35-7f304cc226ad-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 20 11:40:17 crc kubenswrapper[4725]: I0120 11:40:17.369950 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-7l92d" event={"ID":"6716d3a1-d97b-4a7f-9a35-7f304cc226ad","Type":"ContainerDied","Data":"3de7bd20454f36c3eb0175eedae39688ef0d7bed105bb1f393b399fcfe3733ca"} Jan 20 11:40:17 crc kubenswrapper[4725]: I0120 11:40:17.370028 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="3de7bd20454f36c3eb0175eedae39688ef0d7bed105bb1f393b399fcfe3733ca" Jan 20 11:40:17 crc kubenswrapper[4725]: I0120 11:40:17.370098 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-7l92d" Jan 20 11:40:18 crc kubenswrapper[4725]: I0120 11:40:18.360995 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7l92d_6716d3a1-d97b-4a7f-9a35-7f304cc226ad/smoketest-collectd/0.log" Jan 20 11:40:18 crc kubenswrapper[4725]: I0120 11:40:18.636503 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7l92d_6716d3a1-d97b-4a7f-9a35-7f304cc226ad/smoketest-ceilometer/0.log" Jan 20 11:40:18 crc kubenswrapper[4725]: I0120 11:40:18.902254 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-mqfr7_5b2eb85b-dd29-4dc6-9d02-1087e7119ae7/default-interconnect/0.log" Jan 20 11:40:19 crc kubenswrapper[4725]: I0120 11:40:19.212778 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/bridge/2.log" Jan 20 11:40:19 crc kubenswrapper[4725]: I0120 11:40:19.473223 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/sg-core/0.log" Jan 20 11:40:19 crc kubenswrapper[4725]: I0120 11:40:19.791992 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/bridge/2.log" Jan 20 11:40:20 crc kubenswrapper[4725]: I0120 11:40:20.101772 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/sg-core/0.log" Jan 20 11:40:20 crc kubenswrapper[4725]: I0120 11:40:20.383038 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/bridge/2.log" Jan 20 11:40:20 crc kubenswrapper[4725]: I0120 11:40:20.678133 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/sg-core/0.log" Jan 20 11:40:20 crc kubenswrapper[4725]: I0120 11:40:20.963680 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/bridge/2.log" Jan 20 11:40:21 crc kubenswrapper[4725]: I0120 11:40:21.263522 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/sg-core/0.log" Jan 20 11:40:21 crc kubenswrapper[4725]: I0120 11:40:21.534643 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/bridge/2.log" Jan 20 11:40:21 crc kubenswrapper[4725]: I0120 11:40:21.771413 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/sg-core/0.log" Jan 20 11:40:25 crc kubenswrapper[4725]: I0120 11:40:25.395699 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d4f8cb59-xtrqk_288c5de6-7288-478c-b790-1f348c4827f4/operator/0.log" 
Jan 20 11:40:25 crc kubenswrapper[4725]: I0120 11:40:25.671135 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/prometheus/0.log" Jan 20 11:40:25 crc kubenswrapper[4725]: I0120 11:40:25.951302 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6/elasticsearch/0.log" Jan 20 11:40:26 crc kubenswrapper[4725]: I0120 11:40:26.202916 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-fxcvg_c22fff0f-fa8e-40e0-a8dc-a138398b06e7/prometheus-webhook-snmp/0.log" Jan 20 11:40:26 crc kubenswrapper[4725]: I0120 11:40:26.518756 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/alertmanager/0.log" Jan 20 11:40:26 crc kubenswrapper[4725]: I0120 11:40:26.728283 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:40:26 crc kubenswrapper[4725]: I0120 11:40:26.728369 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:40:44 crc kubenswrapper[4725]: I0120 11:40:44.728244 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-9d4584887-5t9dx_653691a1-9088-47bd-97e2-4d2f17f885bf/operator/0.log" Jan 20 11:40:48 crc kubenswrapper[4725]: I0120 11:40:48.236684 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d4f8cb59-xtrqk_288c5de6-7288-478c-b790-1f348c4827f4/operator/0.log" Jan 20 11:40:48 crc kubenswrapper[4725]: I0120 11:40:48.545067 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_879163eb-1e0f-4030-aec9-69331c2e5ecd/qdr/0.log" Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.727992 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.728900 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.728963 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.729874 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" 
containerStatusID={"Type":"cri-o","ID":"292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:40:56 crc kubenswrapper[4725]: I0120 11:40:56.729946 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da" gracePeriod=600 Jan 20 11:40:57 crc kubenswrapper[4725]: I0120 11:40:57.759156 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da" exitCode=0 Jan 20 11:40:57 crc kubenswrapper[4725]: I0120 11:40:57.759226 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da"} Jan 20 11:40:57 crc kubenswrapper[4725]: I0120 11:40:57.760110 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16"} Jan 20 11:40:57 crc kubenswrapper[4725]: I0120 11:40:57.760144 4725 scope.go:117] "RemoveContainer" containerID="fe48825aafe9faa1e47155d728d642228edf9340f6d28bfa1dd850e2aa6e056f" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.122309 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-must-gather-vh25r/must-gather-k86g8"] Jan 20 11:41:13 crc kubenswrapper[4725]: E0120 11:41:13.125064 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-ceilometer" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.125190 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-ceilometer" Jan 20 11:41:13 crc kubenswrapper[4725]: E0120 11:41:13.125280 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-collectd" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.125351 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-collectd" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.125575 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-collectd" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.125651 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="6716d3a1-d97b-4a7f-9a35-7f304cc226ad" containerName="smoketest-ceilometer" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.126659 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.129967 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vh25r"/"openshift-service-ca.crt" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.131354 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-must-gather-vh25r"/"kube-root-ca.crt" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.141522 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vh25r/must-gather-k86g8"] Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.287184 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mrg7c\" (UniqueName: \"kubernetes.io/projected/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-kube-api-access-mrg7c\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.287818 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-must-gather-output\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.390012 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-mrg7c\" (UniqueName: \"kubernetes.io/projected/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-kube-api-access-mrg7c\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.390111 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-must-gather-output\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.390667 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-must-gather-output\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.431152 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-mrg7c\" (UniqueName: \"kubernetes.io/projected/44435c2f-00ef-4c8f-88f3-ff2e79476ff1-kube-api-access-mrg7c\") pod \"must-gather-k86g8\" (UID: \"44435c2f-00ef-4c8f-88f3-ff2e79476ff1\") " pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.449035 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-must-gather-vh25r/must-gather-k86g8" Jan 20 11:41:13 crc kubenswrapper[4725]: I0120 11:41:13.928841 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-vh25r/must-gather-k86g8"] Jan 20 11:41:13 crc kubenswrapper[4725]: W0120 11:41:13.935024 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod44435c2f_00ef_4c8f_88f3_ff2e79476ff1.slice/crio-e5f2af1c1b888f2e401b88b1f21fefa15f8c134ecf12e5b0d237cdc0f79b8f22 WatchSource:0}: Error finding container e5f2af1c1b888f2e401b88b1f21fefa15f8c134ecf12e5b0d237cdc0f79b8f22: Status 404 returned error can't find the container with id e5f2af1c1b888f2e401b88b1f21fefa15f8c134ecf12e5b0d237cdc0f79b8f22 Jan 20 11:41:14 crc kubenswrapper[4725]: I0120 11:41:14.919440 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vh25r/must-gather-k86g8" event={"ID":"44435c2f-00ef-4c8f-88f3-ff2e79476ff1","Type":"ContainerStarted","Data":"e5f2af1c1b888f2e401b88b1f21fefa15f8c134ecf12e5b0d237cdc0f79b8f22"} Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.595264 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.597981 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.606223 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.691870 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.691948 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.692001 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.794126 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.794225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") pod 
\"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.794296 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.795580 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.795923 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.821568 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") pod \"certified-operators-pf4tp\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") " pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:19 crc kubenswrapper[4725]: I0120 11:41:19.941288 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf4tp" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.387447 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.391450 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.394558 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.418739 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") pod \"infrawatch-operators-jkmc4\" (UID: \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\") " pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.520905 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") pod \"infrawatch-operators-jkmc4\" (UID: \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\") " pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.595662 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") pod \"infrawatch-operators-jkmc4\" (UID: \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\") " pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:21 crc kubenswrapper[4725]: I0120 11:41:21.728241 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jkmc4" Jan 20 11:41:23 crc kubenswrapper[4725]: I0120 11:41:22.999510 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"] Jan 20 11:41:23 crc kubenswrapper[4725]: I0120 11:41:23.015477 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vh25r/must-gather-k86g8" event={"ID":"44435c2f-00ef-4c8f-88f3-ff2e79476ff1","Type":"ContainerStarted","Data":"8a41cc5c48cfc65ba6796c1be7ac535542f3f32d228f563264b307dc10ebe1c3"} Jan 20 11:41:23 crc kubenswrapper[4725]: I0120 11:41:23.049794 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"] Jan 20 11:41:23 crc kubenswrapper[4725]: W0120 11:41:23.057331 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod92a63a6c_5e81_4cb6_8c56_ee0673d781fa.slice/crio-2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358 WatchSource:0}: Error finding container 2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358: Status 404 returned error can't find the container with id 2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358 Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.027708 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-vh25r/must-gather-k86g8" event={"ID":"44435c2f-00ef-4c8f-88f3-ff2e79476ff1","Type":"ContainerStarted","Data":"0d57c69c9dd782acdf37233eaaaa9cc500fb981a71b9acbd961994f813858120"} Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.030015 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jkmc4" event={"ID":"92a63a6c-5e81-4cb6-8c56-ee0673d781fa","Type":"ContainerStarted","Data":"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"} Jan 20 11:41:24 crc 
kubenswrapper[4725]: I0120 11:41:24.030107 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jkmc4" event={"ID":"92a63a6c-5e81-4cb6-8c56-ee0673d781fa","Type":"ContainerStarted","Data":"2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358"}
Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.032466 4725 generic.go:334] "Generic (PLEG): container finished" podID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerID="7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074" exitCode=0
Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.032515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerDied","Data":"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074"}
Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.032550 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerStarted","Data":"62fa7c0461e72580f67e750e5a76c501b04839f3e34ae343fd69306f9db9dd66"}
Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.059873 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-vh25r/must-gather-k86g8" podStartSLOduration=2.685537227 podStartE2EDuration="11.059839031s" podCreationTimestamp="2026-01-20 11:41:13 +0000 UTC" firstStartedPulling="2026-01-20 11:41:13.937625766 +0000 UTC m=+2202.145947739" lastFinishedPulling="2026-01-20 11:41:22.31192757 +0000 UTC m=+2210.520249543" observedRunningTime="2026-01-20 11:41:24.052030845 +0000 UTC m=+2212.260352848" watchObservedRunningTime="2026-01-20 11:41:24.059839031 +0000 UTC m=+2212.268161004"
Jan 20 11:41:24 crc kubenswrapper[4725]: I0120 11:41:24.097558 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-jkmc4" podStartSLOduration=2.9462605269999997 podStartE2EDuration="3.097531511s" podCreationTimestamp="2026-01-20 11:41:21 +0000 UTC" firstStartedPulling="2026-01-20 11:41:23.063588161 +0000 UTC m=+2211.271910134" lastFinishedPulling="2026-01-20 11:41:23.214859145 +0000 UTC m=+2211.423181118" observedRunningTime="2026-01-20 11:41:24.089894469 +0000 UTC m=+2212.298216442" watchObservedRunningTime="2026-01-20 11:41:24.097531511 +0000 UTC m=+2212.305853474"
Jan 20 11:41:26 crc kubenswrapper[4725]: I0120 11:41:26.054890 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerStarted","Data":"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"}
Jan 20 11:41:27 crc kubenswrapper[4725]: I0120 11:41:27.085025 4725 generic.go:334] "Generic (PLEG): container finished" podID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerID="1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877" exitCode=0
Jan 20 11:41:27 crc kubenswrapper[4725]: I0120 11:41:27.085454 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerDied","Data":"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"}
Jan 20 11:41:29 crc kubenswrapper[4725]: I0120 11:41:29.111527 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerStarted","Data":"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"}
Jan 20 11:41:29 crc kubenswrapper[4725]: I0120 11:41:29.140976 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-pf4tp" podStartSLOduration=6.235501384 podStartE2EDuration="10.140948985s" podCreationTimestamp="2026-01-20 11:41:19 +0000 UTC" firstStartedPulling="2026-01-20 11:41:24.034294615 +0000 UTC m=+2212.242616588" lastFinishedPulling="2026-01-20 11:41:27.939742216 +0000 UTC m=+2216.148064189" observedRunningTime="2026-01-20 11:41:29.136907578 +0000 UTC m=+2217.345229551" watchObservedRunningTime="2026-01-20 11:41:29.140948985 +0000 UTC m=+2217.349270968"
Jan 20 11:41:29 crc kubenswrapper[4725]: I0120 11:41:29.941898 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-pf4tp"
Jan 20 11:41:29 crc kubenswrapper[4725]: I0120 11:41:29.941968 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-pf4tp"
Jan 20 11:41:30 crc kubenswrapper[4725]: I0120 11:41:30.991894 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-pf4tp" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server" probeResult="failure" output=<
Jan 20 11:41:30 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s
Jan 20 11:41:30 crc kubenswrapper[4725]: >
Jan 20 11:41:31 crc kubenswrapper[4725]: I0120 11:41:31.728702 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-jkmc4"
Jan 20 11:41:31 crc kubenswrapper[4725]: I0120 11:41:31.728779 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-jkmc4"
Jan 20 11:41:31 crc kubenswrapper[4725]: I0120 11:41:31.765805 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-jkmc4"
Jan 20 11:41:32 crc kubenswrapper[4725]: I0120 11:41:32.166695 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-jkmc4"
Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.341430 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"]
Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.343633 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-jkmc4" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerName="registry-server" containerID="cri-o://0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30" gracePeriod=2
Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.765983 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jkmc4"
Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.826603 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") pod \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\" (UID: \"92a63a6c-5e81-4cb6-8c56-ee0673d781fa\") "
Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.835063 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5" (OuterVolumeSpecName: "kube-api-access-vb4p5") pod "92a63a6c-5e81-4cb6-8c56-ee0673d781fa" (UID: "92a63a6c-5e81-4cb6-8c56-ee0673d781fa"). InnerVolumeSpecName "kube-api-access-vb4p5". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:41:35 crc kubenswrapper[4725]: I0120 11:41:35.928332 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vb4p5\" (UniqueName: \"kubernetes.io/projected/92a63a6c-5e81-4cb6-8c56-ee0673d781fa-kube-api-access-vb4p5\") on node \"crc\" DevicePath \"\""
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337000 4725 generic.go:334] "Generic (PLEG): container finished" podID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerID="0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30" exitCode=0
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337067 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-jkmc4"
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337061 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jkmc4" event={"ID":"92a63a6c-5e81-4cb6-8c56-ee0673d781fa","Type":"ContainerDied","Data":"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"}
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337137 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-jkmc4" event={"ID":"92a63a6c-5e81-4cb6-8c56-ee0673d781fa","Type":"ContainerDied","Data":"2d85c1bbc386137a17d093347dadb65cf240291c90174ff10995d5959be3d358"}
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.337167 4725 scope.go:117] "RemoveContainer" containerID="0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.364162 4725 scope.go:117] "RemoveContainer" containerID="0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"
Jan 20 11:41:36 crc kubenswrapper[4725]: E0120 11:41:36.364961 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30\": container with ID starting with 0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30 not found: ID does not exist" containerID="0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.365002 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30"} err="failed to get container status \"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30\": rpc error: code = NotFound desc = could not find container \"0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30\": container with ID starting with 0380d57447b0c0fe229e2df297afe5e1255e5dd1be08215dc551ed228cc51d30 not found: ID does not exist"
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.376355 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"]
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.383209 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-jkmc4"]
Jan 20 11:41:36 crc kubenswrapper[4725]: I0120 11:41:36.941776 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" path="/var/lib/kubelet/pods/92a63a6c-5e81-4cb6-8c56-ee0673d781fa/volumes"
Jan 20 11:41:39 crc kubenswrapper[4725]: I0120 11:41:39.161413 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sh5db_b07c5d50-bb91-412d-b86a-3d736a16a81d/control-plane-machine-set-operator/0.log"
Jan 20 11:41:39 crc kubenswrapper[4725]: I0120 11:41:39.183486 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhz9f_808fb947-228d-42c4-ba11-480348f80d8a/kube-rbac-proxy/0.log"
Jan 20 11:41:39 crc kubenswrapper[4725]: I0120 11:41:39.193549 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhz9f_808fb947-228d-42c4-ba11-480348f80d8a/machine-api-operator/0.log"
Jan 20 11:41:39 crc kubenswrapper[4725]: I0120 11:41:39.993150 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-pf4tp"
Jan 20 11:41:40 crc kubenswrapper[4725]: I0120 11:41:40.044671 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-pf4tp"
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.362584 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"]
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.363362 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-pf4tp" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server" containerID="cri-o://fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f" gracePeriod=2
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.773276 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf4tp"
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.864271 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") pod \"e27af684-a552-4b4d-ab63-82b662b0dad7\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") "
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.864416 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") pod \"e27af684-a552-4b4d-ab63-82b662b0dad7\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") "
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.864453 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") pod \"e27af684-a552-4b4d-ab63-82b662b0dad7\" (UID: \"e27af684-a552-4b4d-ab63-82b662b0dad7\") "
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.866291 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities" (OuterVolumeSpecName: "utilities") pod "e27af684-a552-4b4d-ab63-82b662b0dad7" (UID: "e27af684-a552-4b4d-ab63-82b662b0dad7"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.873237 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t" (OuterVolumeSpecName: "kube-api-access-k599t") pod "e27af684-a552-4b4d-ab63-82b662b0dad7" (UID: "e27af684-a552-4b4d-ab63-82b662b0dad7"). InnerVolumeSpecName "kube-api-access-k599t". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.922022 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e27af684-a552-4b4d-ab63-82b662b0dad7" (UID: "e27af684-a552-4b4d-ab63-82b662b0dad7"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.966288 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-k599t\" (UniqueName: \"kubernetes.io/projected/e27af684-a552-4b4d-ab63-82b662b0dad7-kube-api-access-k599t\") on node \"crc\" DevicePath \"\""
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.966656 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 11:41:42 crc kubenswrapper[4725]: I0120 11:41:42.966758 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e27af684-a552-4b4d-ab63-82b662b0dad7-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403435 4725 generic.go:334] "Generic (PLEG): container finished" podID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerID="fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f" exitCode=0
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403508 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerDied","Data":"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"}
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403559 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-pf4tp" event={"ID":"e27af684-a552-4b4d-ab63-82b662b0dad7","Type":"ContainerDied","Data":"62fa7c0461e72580f67e750e5a76c501b04839f3e34ae343fd69306f9db9dd66"}
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403582 4725 scope.go:117] "RemoveContainer" containerID="fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.403577 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-pf4tp"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.434229 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"]
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.443336 4725 scope.go:117] "RemoveContainer" containerID="1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.448438 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-pf4tp"]
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.467538 4725 scope.go:117] "RemoveContainer" containerID="7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.500448 4725 scope.go:117] "RemoveContainer" containerID="fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"
Jan 20 11:41:43 crc kubenswrapper[4725]: E0120 11:41:43.501620 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f\": container with ID starting with fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f not found: ID does not exist" containerID="fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.501699 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f"} err="failed to get container status \"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f\": rpc error: code = NotFound desc = could not find container \"fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f\": container with ID starting with fc40305bef50353ae620b2afda85362341cabc59e022ee456f3f5c84ff80649f not found: ID does not exist"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.501748 4725 scope.go:117] "RemoveContainer" containerID="1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"
Jan 20 11:41:43 crc kubenswrapper[4725]: E0120 11:41:43.502312 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877\": container with ID starting with 1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877 not found: ID does not exist" containerID="1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.502360 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877"} err="failed to get container status \"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877\": rpc error: code = NotFound desc = could not find container \"1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877\": container with ID starting with 1292d932a1ba78ba79ce89e3eede5fdb84982f46def0c6932e935c1844c8d877 not found: ID does not exist"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.502385 4725 scope.go:117] "RemoveContainer" containerID="7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074"
Jan 20 11:41:43 crc kubenswrapper[4725]: E0120 11:41:43.502802 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074\": container with ID starting with 7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074 not found: ID does not exist" containerID="7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074"
Jan 20 11:41:43 crc kubenswrapper[4725]: I0120 11:41:43.502828 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074"} err="failed to get container status \"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074\": rpc error: code = NotFound desc = could not find container \"7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074\": container with ID starting with 7b08d6289c1616dbde45c672cda424f7fad6ac36308604fafdb179ba44dc1074 not found: ID does not exist"
Jan 20 11:41:44 crc kubenswrapper[4725]: I0120 11:41:44.646485 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-8pwdf_f31ab59c-7288-4ebb-82b4-daa77ec5319c/cert-manager-controller/0.log"
Jan 20 11:41:44 crc kubenswrapper[4725]: I0120 11:41:44.665089 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2m9v2_62554d79-c9bb-4b40-9153-989791392664/cert-manager-cainjector/0.log"
Jan 20 11:41:44 crc kubenswrapper[4725]: I0120 11:41:44.680951 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-bxlks_8b639e20-8ca7-4b37-8271-ada2858140b9/cert-manager-webhook/0.log"
Jan 20 11:41:44 crc kubenswrapper[4725]: I0120 11:41:44.942678 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" path="/var/lib/kubelet/pods/e27af684-a552-4b4d-ab63-82b662b0dad7/volumes"
Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.266927 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sl5rg_0bc9f0db-ee2d-43d3-8fc7-66f2b155c710/prometheus-operator/0.log"
Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.279945 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-lh85b_05acb89f-79ef-4e5a-8713-af3abbf86d5a/prometheus-operator-admission-webhook/0.log"
Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.299182 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5_a5d78053-6a08-448a-93ca-1c0e2334617a/prometheus-operator-admission-webhook/0.log"
Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.322683 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-cjnzp_ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002/operator/0.log"
Jan 20 11:41:50 crc kubenswrapper[4725]: I0120 11:41:50.336844 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ckz5m_5a2dcc7a-6d62-412d-a25f-fea592c85bf5/perses-operator/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.822992 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd_10d53364-23ca-4726-bed9-460fb6763fa1/extract/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.833897 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd_10d53364-23ca-4726-bed9-460fb6763fa1/util/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.881505 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931aqmsnd_10d53364-23ca-4726-bed9-460fb6763fa1/pull/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.893428 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk_484dd827-7fd5-4cbc-878f-400b31b6179c/extract/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.904718 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk_484dd827-7fd5-4cbc-878f-400b31b6179c/util/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.921481 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fhqstk_484dd827-7fd5-4cbc-878f-400b31b6179c/pull/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.935449 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms_ea19653a-0b47-400b-bcce-8034cb7f6d55/extract/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.946660 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms_ea19653a-0b47-400b-bcce-8034cb7f6d55/util/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.957276 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5enw4ms_ea19653a-0b47-400b-bcce-8034cb7f6d55/pull/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.981300 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm_418d6042-ac1e-433e-a820-04d774775787/extract/0.log"
Jan 20 11:41:55 crc kubenswrapper[4725]: I0120 11:41:55.991240 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm_418d6042-ac1e-433e-a820-04d774775787/util/0.log"
Jan 20 11:41:56 crc kubenswrapper[4725]: I0120 11:41:56.003159 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f08mjkhm_418d6042-ac1e-433e-a820-04d774775787/pull/0.log"
Jan 20 11:41:56 crc kubenswrapper[4725]: I0120 11:41:56.475214 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6dzml_e1530fd1-1850-4d4f-b6a7-cc1784d9c399/registry-server/0.log"
Jan 20 11:41:56 crc kubenswrapper[4725]: I0120 11:41:56.482237 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6dzml_e1530fd1-1850-4d4f-b6a7-cc1784d9c399/extract-utilities/0.log"
Jan 20 11:41:56 crc kubenswrapper[4725]: I0120 11:41:56.497498 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-6dzml_e1530fd1-1850-4d4f-b6a7-cc1784d9c399/extract-content/0.log"
Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.150366 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hm4k5_da38c2a2-fb87-4115-ac25-0256bee850ae/registry-server/0.log"
Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.157894 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hm4k5_da38c2a2-fb87-4115-ac25-0256bee850ae/extract-utilities/0.log"
Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.168710 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-hm4k5_da38c2a2-fb87-4115-ac25-0256bee850ae/extract-content/0.log"
Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.191414 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-79b997595-htj9r_5666b0dd-5364-4bee-a091-26fa796770cf/marketplace-operator/0.log"
Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.585057 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hht7w_2c4020a9-4953-4dee-8bc0-2329493c8b8a/registry-server/0.log"
Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.590715 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hht7w_2c4020a9-4953-4dee-8bc0-2329493c8b8a/extract-utilities/0.log"
Jan 20 11:41:57 crc kubenswrapper[4725]: I0120 11:41:57.600038 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-hht7w_2c4020a9-4953-4dee-8bc0-2329493c8b8a/extract-content/0.log"
Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.432589 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sl5rg_0bc9f0db-ee2d-43d3-8fc7-66f2b155c710/prometheus-operator/0.log"
Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.449440 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-lh85b_05acb89f-79ef-4e5a-8713-af3abbf86d5a/prometheus-operator-admission-webhook/0.log"
Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.464474 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5_a5d78053-6a08-448a-93ca-1c0e2334617a/prometheus-operator-admission-webhook/0.log"
Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.483209 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-cjnzp_ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002/operator/0.log"
Jan 20 11:42:01 crc kubenswrapper[4725]: I0120 11:42:01.508999 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ckz5m_5a2dcc7a-6d62-412d-a25f-fea592c85bf5/perses-operator/0.log"
Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.781149 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-68bc856cb9-sl5rg_0bc9f0db-ee2d-43d3-8fc7-66f2b155c710/prometheus-operator/0.log"
Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.798557 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-lh85b_05acb89f-79ef-4e5a-8713-af3abbf86d5a/prometheus-operator-admission-webhook/0.log"
Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.815928 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-c88c9f498-xjlm5_a5d78053-6a08-448a-93ca-1c0e2334617a/prometheus-operator-admission-webhook/0.log"
Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.837412 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-59bdc8b94-cjnzp_ed6da84f-d4dc-4469-bbf9-2a4ac3e3e002/operator/0.log"
Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.855779 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-5bf474d74f-ckz5m_5a2dcc7a-6d62-412d-a25f-fea592c85bf5/perses-operator/0.log"
Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.971900 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-8pwdf_f31ab59c-7288-4ebb-82b4-daa77ec5319c/cert-manager-controller/0.log"
Jan 20 11:42:10 crc kubenswrapper[4725]: I0120 11:42:10.984872 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2m9v2_62554d79-c9bb-4b40-9153-989791392664/cert-manager-cainjector/0.log"
Jan 20 11:42:11 crc kubenswrapper[4725]: I0120 11:42:11.002177 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-bxlks_8b639e20-8ca7-4b37-8271-ada2858140b9/cert-manager-webhook/0.log"
Jan 20 11:42:11 crc kubenswrapper[4725]: I0120 11:42:11.521989 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-86cb77c54b-8pwdf_f31ab59c-7288-4ebb-82b4-daa77ec5319c/cert-manager-controller/0.log"
Jan 20 11:42:11 crc kubenswrapper[4725]: I0120 11:42:11.536520 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-855d9ccff4-2m9v2_62554d79-c9bb-4b40-9153-989791392664/cert-manager-cainjector/0.log"
Jan 20 11:42:11 crc kubenswrapper[4725]: I0120 11:42:11.549438 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-f4fb5df64-bxlks_8b639e20-8ca7-4b37-8271-ada2858140b9/cert-manager-webhook/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.052927 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-78cbb6b69f-sh5db_b07c5d50-bb91-412d-b86a-3d736a16a81d/control-plane-machine-set-operator/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.069530 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhz9f_808fb947-228d-42c4-ba11-480348f80d8a/kube-rbac-proxy/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.078990 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-5694c8668f-hhz9f_808fb947-228d-42c4-ba11-480348f80d8a/machine-api-operator/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.668823 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75_34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83/extract/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.679359 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75_34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83/util/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.688498 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09tzk75_34d9f6e3-822c-4b9e-a9f1-4f5fa7a8ce83/pull/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.701433 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4_6c49be43-a86b-4475-8bd3-a1105dd19ad1/extract/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.708762 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4_6c49be43-a86b-4475-8bd3-a1105dd19ad1/util/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.717686 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a4kbk4_6c49be43-a86b-4475-8bd3-a1105dd19ad1/pull/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.733840 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/alertmanager/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.741930 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/config-reloader/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.748754 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/oauth-proxy/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.757123 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_f490a619-9c48-49a0-857b-904084871923/init-config-reloader/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.771445 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_650f5183-3a46-4da1-befe-a96b43c85a6e/curl/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.782137 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/bridge/2.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.782799 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/bridge/1.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.789446 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-647666d9c8-x4p2q_f84a2726-80cb-4393-84ca-d901b4ee446c/sg-core/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.803819 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/oauth-proxy/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.811058 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/bridge/2.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.811118 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/bridge/1.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.815911 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-57948895dc-2zm6p_6b74ea17-71c5-47e0-a15e-e963223f11f0/sg-core/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.826872 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/bridge/2.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.827023 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/bridge/1.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.832300 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-ff457bf89-458zm_739b7c2c-b11b-4260-a184-7dd184677dad/sg-core/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.845629 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/oauth-proxy/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.854547 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/bridge/1.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.854829 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/bridge/2.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.860427 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7cd87f9766-7b54g_10b6bc99-b2ce-4952-a481-bbabe3a3fc16/sg-core/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.871452 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/oauth-proxy/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.880270 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/bridge/1.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.880345 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/bridge/2.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.886921 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-5759b4d97-6lcp7_14922311-0e93-4bf9-8980-72baefd93497/sg-core/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.906821 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-68864d46cb-mqfr7_5b2eb85b-dd29-4dc6-9d02-1087e7119ae7/default-interconnect/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.917988 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-6856cfb745-fxcvg_c22fff0f-fa8e-40e0-a8dc-a138398b06e7/prometheus-webhook-snmp/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.949759 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elastic-operator-6886c99b94-tzbc7_ce11e344-b219-4b22-b05b-a21b78fc7d98/manager/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.972115 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6/elasticsearch/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.981132 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6/elastic-internal-init-filesystem/0.log"
Jan 20 11:42:12 crc kubenswrapper[4725]: I0120 11:42:12.987600 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_f12e47b3-54a1-4f6b-8e7a-0dc9f25358f6/elastic-internal-suspend/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.001859 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_infrawatch-operators-4fmg5_514d6114-a2ee-4a88-9798-9a27066ed03a/registry-server/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.015674 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_interconnect-operator-5bb49f789d-7p9dr_a923dc59-d518-4ee4-a92c-1bb5ad6e7158/interconnect-operator/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.035104 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/prometheus/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.041677 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/config-reloader/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.050230 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/oauth-proxy/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.058739 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_7d31d6ca-dd83-489d-9956-abb0947df80d/init-config-reloader/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.103551 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_851c53a0-c674-49b2-88dc-77da0a70406b/docker-build/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.110727 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_851c53a0-c674-49b2-88dc-77da0a70406b/git-clone/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.121488 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_851c53a0-c674-49b2-88dc-77da0a70406b/manage-dockerfile/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.136912 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_879163eb-1e0f-4030-aec9-69331c2e5ecd/qdr/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.151733 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_184194a7-f32c-4db2-a055-5a776484cda8/docker-build/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.170354 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_184194a7-f32c-4db2-a055-5a776484cda8/git-clone/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.180855 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_184194a7-f32c-4db2-a055-5a776484cda8/manage-dockerfile/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.245623 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_60ad3a7d-367d-4604-a9d4-c6e3baf344ac/docker-build/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.254937 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_60ad3a7d-367d-4604-a9d4-c6e3baf344ac/git-clone/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.267844 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_60ad3a7d-367d-4604-a9d4-c6e3baf344ac/manage-dockerfile/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.531424 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-9d4584887-5t9dx_653691a1-9088-47bd-97e2-4d2f17f885bf/operator/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.549255 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_814e040b-c073-451b-80c4-2e90cb554a6b/docker-build/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.556034 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_814e040b-c073-451b-80c4-2e90cb554a6b/git-clone/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.566672 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_814e040b-c073-451b-80c4-2e90cb554a6b/manage-dockerfile/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.634394 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286/docker-build/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.646474 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286/git-clone/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.659158 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_5ae4e99b-9f17-48fe-a9d1-1c4cf4a14286/manage-dockerfile/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.726392 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_c6289b31-17e1-4470-b65b-20f1454c9faf/docker-build/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.732835 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_c6289b31-17e1-4470-b65b-20f1454c9faf/git-clone/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.740881 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_c6289b31-17e1-4470-b65b-20f1454c9faf/manage-dockerfile/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.798995 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7/docker-build/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.806936 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7/git-clone/0.log"
Jan 20 11:42:13 crc kubenswrapper[4725]: I0120 11:42:13.816693 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_de1388d1-c8a5-4fe4-988b-ee6dcd6b67d7/manage-dockerfile/0.log"
Jan 20 11:42:16 crc kubenswrapper[4725]: I0120 11:42:16.995836 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-86d4f8cb59-xtrqk_288c5de6-7288-478c-b790-1f348c4827f4/operator/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.013694 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b276c041-188f-4dd1-a7b4-0d0ba6531174/docker-build/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.024629 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b276c041-188f-4dd1-a7b4-0d0ba6531174/git-clone/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.031538 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_b276c041-188f-4dd1-a7b4-0d0ba6531174/manage-dockerfile/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.053722 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7l92d_6716d3a1-d97b-4a7f-9a35-7f304cc226ad/smoketest-collectd/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.060430 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-7l92d_6716d3a1-d97b-4a7f-9a35-7f304cc226ad/smoketest-ceilometer/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.082017 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-n5jwb_98772f19-fcd3-4ee3-91e7-aa87154c3c50/smoketest-collectd/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.088762 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-n5jwb_98772f19-fcd3-4ee3-91e7-aa87154c3c50/smoketest-ceilometer/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.108432 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-phjxw_3e274138-1522-41f2-8021-9f425af23d2e/smoketest-collectd/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.117018 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-phjxw_3e274138-1522-41f2-8021-9f425af23d2e/smoketest-ceilometer/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.136667 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-z2qv6_6c81226d-b3a8-4f68-8c87-b32fe8ae7901/smoketest-collectd/0.log"
Jan 20 11:42:17 crc kubenswrapper[4725]: I0120 11:42:17.144413 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-z2qv6_6c81226d-b3a8-4f68-8c87-b32fe8ae7901/smoketest-ceilometer/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.636889 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/kube-multus-additional-cni-plugins/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.648310 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/egress-router-binary-copy/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.656641 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/cni-plugins/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.668452 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/bond-cni-plugin/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.677810 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/routeoverride-cni/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.687232 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/whereabouts-cni-bincopy/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.700280 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-additional-cni-plugins-z7f69_4f174e46-7c9b-4d47-9cd6-47a7f9bbe6d0/whereabouts-cni/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.716674 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-7j2sn_eca1f8da-59f2-404e-a5e0-dbe1a191b885/multus-admission-controller/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.730829 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-admission-controller-857f4d67dd-7j2sn_eca1f8da-59f2-404e-a5e0-dbe1a191b885/kube-rbac-proxy/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.773626 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/3.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.783793 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-vchwb_627f7c97-4173-413f-a90e-e2c5e058c53b/kube-multus/2.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.810789 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-5lfc4_a5d55efc-e85a-4a02-a4ce-7355df9fea66/network-metrics-daemon/0.log"
Jan 20 11:42:18 crc kubenswrapper[4725]: I0120 11:42:18.817023 4725 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_network-metrics-daemon-5lfc4_a5d55efc-e85a-4a02-a4ce-7355df9fea66/kube-rbac-proxy/0.log"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.625194 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-ksjd9"]
Jan 20 11:42:44 crc kubenswrapper[4725]: E0120 11:42:44.626449 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerName="registry-server"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626470 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerName="registry-server"
Jan 20 11:42:44 crc kubenswrapper[4725]: E0120 11:42:44.626491 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626500 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server"
Jan 20 11:42:44 crc kubenswrapper[4725]: E0120 11:42:44.626516 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="extract-utilities"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626524 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="extract-utilities"
Jan 20 11:42:44 crc kubenswrapper[4725]: E0120 11:42:44.626545 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="extract-content"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626551 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="extract-content"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626700 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="e27af684-a552-4b4d-ab63-82b662b0dad7" containerName="registry-server"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.626718 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="92a63a6c-5e81-4cb6-8c56-ee0673d781fa" containerName="registry-server"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.627991 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.649148 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"]
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.670810 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.670883 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.671162 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.772906 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.773017 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.773173 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.774302 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.774499 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.804457 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") pod \"community-operators-ksjd9\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") " pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:44 crc kubenswrapper[4725]: I0120 11:42:44.947419 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:45 crc kubenswrapper[4725]: I0120 11:42:45.212744 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"]
Jan 20 11:42:46 crc kubenswrapper[4725]: I0120 11:42:46.028274 4725 generic.go:334] "Generic (PLEG): container finished" podID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerID="a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0" exitCode=0
Jan 20 11:42:46 crc kubenswrapper[4725]: I0120 11:42:46.028333 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerDied","Data":"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0"}
Jan 20 11:42:46 crc kubenswrapper[4725]: I0120 11:42:46.028691 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerStarted","Data":"08307e5000e1200390aae9cca49768e312d87b34a14dc48a1fcbef24ca6e7152"}
Jan 20 11:42:48 crc kubenswrapper[4725]: I0120 11:42:48.057890 4725 generic.go:334] "Generic (PLEG): container finished" podID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerID="ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc" exitCode=0
Jan 20 11:42:48 crc kubenswrapper[4725]: I0120 11:42:48.057999 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerDied","Data":"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc"}
Jan 20 11:42:49 crc kubenswrapper[4725]: I0120 11:42:49.071580 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerStarted","Data":"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28"}
Jan 20 11:42:49 crc kubenswrapper[4725]: I0120 11:42:49.098275 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-ksjd9" podStartSLOduration=2.421850446 podStartE2EDuration="5.098204919s" podCreationTimestamp="2026-01-20 11:42:44 +0000 UTC" firstStartedPulling="2026-01-20 11:42:46.032057814 +0000 UTC m=+2294.240379787" lastFinishedPulling="2026-01-20 11:42:48.708412287 +0000 UTC m=+2296.916734260" observedRunningTime="2026-01-20 11:42:49.092034264 +0000 UTC m=+2297.300356257" watchObservedRunningTime="2026-01-20 11:42:49.098204919 +0000 UTC m=+2297.306526892"
Jan 20 11:42:54 crc kubenswrapper[4725]: I0120 11:42:54.947643 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:54 crc kubenswrapper[4725]: I0120 11:42:54.948771 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:55 crc kubenswrapper[4725]: I0120 11:42:55.008266 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:55 crc kubenswrapper[4725]: I0120 11:42:55.175466 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:55 crc kubenswrapper[4725]: I0120 11:42:55.253065 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"]
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.144156 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-ksjd9" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="registry-server" containerID="cri-o://3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" gracePeriod=2
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.610713 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-ksjd9"
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.646527 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") pod \"42b144f9-6444-48d2-8e34-ee4ab42f3221\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") "
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.646650 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") pod \"42b144f9-6444-48d2-8e34-ee4ab42f3221\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") "
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.646711 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") pod \"42b144f9-6444-48d2-8e34-ee4ab42f3221\" (UID: \"42b144f9-6444-48d2-8e34-ee4ab42f3221\") "
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.648426 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities" (OuterVolumeSpecName: "utilities") pod "42b144f9-6444-48d2-8e34-ee4ab42f3221" (UID: "42b144f9-6444-48d2-8e34-ee4ab42f3221"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.655418 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb" (OuterVolumeSpecName: "kube-api-access-n2xbb") pod "42b144f9-6444-48d2-8e34-ee4ab42f3221" (UID: "42b144f9-6444-48d2-8e34-ee4ab42f3221"). InnerVolumeSpecName "kube-api-access-n2xbb". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.703745 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "42b144f9-6444-48d2-8e34-ee4ab42f3221" (UID: "42b144f9-6444-48d2-8e34-ee4ab42f3221"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.748281 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-n2xbb\" (UniqueName: \"kubernetes.io/projected/42b144f9-6444-48d2-8e34-ee4ab42f3221-kube-api-access-n2xbb\") on node \"crc\" DevicePath \"\""
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.748331 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 11:42:57 crc kubenswrapper[4725]: I0120 11:42:57.748345 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/42b144f9-6444-48d2-8e34-ee4ab42f3221-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.250550 4725 generic.go:334] "Generic (PLEG): container finished" podID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerID="3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" exitCode=0
Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.251465 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerDied","Data":"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28"}
Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.251627 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-ksjd9" event={"ID":"42b144f9-6444-48d2-8e34-ee4ab42f3221","Type":"ContainerDied","Data":"08307e5000e1200390aae9cca49768e312d87b34a14dc48a1fcbef24ca6e7152"}
Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.251731 4725 scope.go:117] "RemoveContainer" containerID="3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28"
Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.252146 4725 util.go:48] "No ready sandbox for pod can be found.
Need to start a new one" pod="openshift-marketplace/community-operators-ksjd9" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.274292 4725 scope.go:117] "RemoveContainer" containerID="ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.299748 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"] Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.308209 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-ksjd9"] Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.311580 4725 scope.go:117] "RemoveContainer" containerID="a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.332171 4725 scope.go:117] "RemoveContainer" containerID="3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" Jan 20 11:42:58 crc kubenswrapper[4725]: E0120 11:42:58.333006 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28\": container with ID starting with 3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28 not found: ID does not exist" containerID="3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.333126 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28"} err="failed to get container status \"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28\": rpc error: code = NotFound desc = could not find container \"3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28\": container with ID starting with 3425aa642a811a9b6d945d1cc4603c48bcab10d4e6da2b2362c9d6b3fd135a28 not found: ID does not exist" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.333189 4725 scope.go:117] "RemoveContainer" containerID="ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc" Jan 20 11:42:58 crc kubenswrapper[4725]: E0120 11:42:58.334242 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc\": container with ID starting with ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc not found: ID does not exist" containerID="ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.334272 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc"} err="failed to get container status \"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc\": rpc error: code = NotFound desc = could not find container \"ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc\": container with ID starting with ed6ffddf4033b8a3333ca4e0f9e5d37e97f3e5013451c4a8f770b92f672bc6cc not found: ID does not exist" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.334299 4725 scope.go:117] "RemoveContainer" containerID="a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0" Jan 20 11:42:58 crc kubenswrapper[4725]: E0120 11:42:58.334596 4725 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0\": container with ID starting with a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0 not found: ID does not exist" containerID="a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.334623 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0"} err="failed to get container status \"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0\": rpc error: code = NotFound desc = could not find container \"a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0\": container with ID starting with a2d65d3d7a2ad8f89ffcfebc63ff2d80d6211a40aa1da43a3766f50389bd0ac0 not found: ID does not exist" Jan 20 11:42:58 crc kubenswrapper[4725]: I0120 11:42:58.950381 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" path="/var/lib/kubelet/pods/42b144f9-6444-48d2-8e34-ee4ab42f3221/volumes" Jan 20 11:43:26 crc kubenswrapper[4725]: I0120 11:43:26.728229 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:43:26 crc kubenswrapper[4725]: I0120 11:43:26.730306 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:43:56 crc kubenswrapper[4725]: I0120 11:43:56.727950 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:43:56 crc kubenswrapper[4725]: I0120 11:43:56.728968 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.727529 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.728442 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.728516 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" 
status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.729492 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:44:26 crc kubenswrapper[4725]: I0120 11:44:26.729562 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" gracePeriod=600 Jan 20 11:44:27 crc kubenswrapper[4725]: I0120 11:44:27.084914 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" exitCode=0 Jan 20 11:44:27 crc kubenswrapper[4725]: I0120 11:44:27.084993 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16"} Jan 20 11:44:27 crc kubenswrapper[4725]: I0120 11:44:27.085554 4725 scope.go:117] "RemoveContainer" containerID="292cc1dcad048c4d54590f6587ea4f65e53dd5f83f4235deae520cfb086277da" Jan 20 11:44:27 crc kubenswrapper[4725]: E0120 11:44:27.526840 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:44:28 crc kubenswrapper[4725]: I0120 11:44:28.099899 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:44:28 crc kubenswrapper[4725]: E0120 11:44:28.100219 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:44:39 crc kubenswrapper[4725]: I0120 11:44:39.932469 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:44:39 crc kubenswrapper[4725]: E0120 11:44:39.933144 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:44:54 crc 
kubenswrapper[4725]: I0120 11:44:54.932653 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:44:54 crc kubenswrapper[4725]: E0120 11:44:54.933958 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.390476 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx"] Jan 20 11:45:00 crc kubenswrapper[4725]: E0120 11:45:00.395332 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="extract-content" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.395376 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="extract-content" Jan 20 11:45:00 crc kubenswrapper[4725]: E0120 11:45:00.395396 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="extract-utilities" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.395404 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="extract-utilities" Jan 20 11:45:00 crc kubenswrapper[4725]: E0120 11:45:00.395425 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="registry-server" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.395432 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="registry-server" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.395591 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="42b144f9-6444-48d2-8e34-ee4ab42f3221" containerName="registry-server" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.404672 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.405013 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx"] Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.408579 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.409291 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.588062 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.588433 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.588560 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.690115 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.690225 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.690274 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.691270 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") pod 
\"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.700902 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.721844 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") pod \"collect-profiles-29481825-lmjvx\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:00 crc kubenswrapper[4725]: I0120 11:45:00.737578 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:01 crc kubenswrapper[4725]: I0120 11:45:01.035454 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx"] Jan 20 11:45:01 crc kubenswrapper[4725]: I0120 11:45:01.304398 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" event={"ID":"5b706901-8a1e-4f91-988f-0f295b512b2b","Type":"ContainerStarted","Data":"29f5fb382a65157c4331129f6528e3f9e62bf727870488a755d44c354a4f9892"} Jan 20 11:45:01 crc kubenswrapper[4725]: I0120 11:45:01.304509 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" event={"ID":"5b706901-8a1e-4f91-988f-0f295b512b2b","Type":"ContainerStarted","Data":"6ece4f6fab7495ec98fb9171574deaf28dccb122b438616bc7f6a16567a70ea3"} Jan 20 11:45:01 crc kubenswrapper[4725]: I0120 11:45:01.328473 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" podStartSLOduration=1.3284469269999999 podStartE2EDuration="1.328446927s" podCreationTimestamp="2026-01-20 11:45:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 11:45:01.327127515 +0000 UTC m=+2429.535449488" watchObservedRunningTime="2026-01-20 11:45:01.328446927 +0000 UTC m=+2429.536768900" Jan 20 11:45:02 crc kubenswrapper[4725]: I0120 11:45:02.313399 4725 generic.go:334] "Generic (PLEG): container finished" podID="5b706901-8a1e-4f91-988f-0f295b512b2b" containerID="29f5fb382a65157c4331129f6528e3f9e62bf727870488a755d44c354a4f9892" exitCode=0 Jan 20 11:45:02 crc kubenswrapper[4725]: I0120 11:45:02.313660 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" event={"ID":"5b706901-8a1e-4f91-988f-0f295b512b2b","Type":"ContainerDied","Data":"29f5fb382a65157c4331129f6528e3f9e62bf727870488a755d44c354a4f9892"} Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.583428 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.720343 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") pod \"5b706901-8a1e-4f91-988f-0f295b512b2b\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.720456 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") pod \"5b706901-8a1e-4f91-988f-0f295b512b2b\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.720706 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") pod \"5b706901-8a1e-4f91-988f-0f295b512b2b\" (UID: \"5b706901-8a1e-4f91-988f-0f295b512b2b\") " Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.721834 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume" (OuterVolumeSpecName: "config-volume") pod "5b706901-8a1e-4f91-988f-0f295b512b2b" (UID: "5b706901-8a1e-4f91-988f-0f295b512b2b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.722484 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5b706901-8a1e-4f91-988f-0f295b512b2b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.728223 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "5b706901-8a1e-4f91-988f-0f295b512b2b" (UID: "5b706901-8a1e-4f91-988f-0f295b512b2b"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.730497 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd" (OuterVolumeSpecName: "kube-api-access-rjlfd") pod "5b706901-8a1e-4f91-988f-0f295b512b2b" (UID: "5b706901-8a1e-4f91-988f-0f295b512b2b"). InnerVolumeSpecName "kube-api-access-rjlfd". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.824301 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-rjlfd\" (UniqueName: \"kubernetes.io/projected/5b706901-8a1e-4f91-988f-0f295b512b2b-kube-api-access-rjlfd\") on node \"crc\" DevicePath \"\"" Jan 20 11:45:03 crc kubenswrapper[4725]: I0120 11:45:03.824363 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/5b706901-8a1e-4f91-988f-0f295b512b2b-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.452959 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" event={"ID":"5b706901-8a1e-4f91-988f-0f295b512b2b","Type":"ContainerDied","Data":"6ece4f6fab7495ec98fb9171574deaf28dccb122b438616bc7f6a16567a70ea3"} Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.453733 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="6ece4f6fab7495ec98fb9171574deaf28dccb122b438616bc7f6a16567a70ea3" Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.453061 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481825-lmjvx" Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.674755 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"] Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.681436 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481780-smks9"] Jan 20 11:45:04 crc kubenswrapper[4725]: I0120 11:45:04.944343 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e2d56c6e-b9ad-4de9-8fe6-06b00293050e" path="/var/lib/kubelet/pods/e2d56c6e-b9ad-4de9-8fe6-06b00293050e/volumes" Jan 20 11:45:08 crc kubenswrapper[4725]: I0120 11:45:08.932932 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:08 crc kubenswrapper[4725]: E0120 11:45:08.935610 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:17 crc kubenswrapper[4725]: I0120 11:45:17.018317 4725 scope.go:117] "RemoveContainer" containerID="e8e7a4e36aba81c1bb4622af4c301d49b30996cd6ad2e2e0a5c6e98da1b99ab0" Jan 20 11:45:19 crc kubenswrapper[4725]: I0120 11:45:19.932635 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:19 crc kubenswrapper[4725]: E0120 11:45:19.933662 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 
11:45:33 crc kubenswrapper[4725]: I0120 11:45:33.932579 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:33 crc kubenswrapper[4725]: E0120 11:45:33.933827 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:47 crc kubenswrapper[4725]: I0120 11:45:47.932231 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:47 crc kubenswrapper[4725]: E0120 11:45:47.933355 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:45:59 crc kubenswrapper[4725]: I0120 11:45:59.933045 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:45:59 crc kubenswrapper[4725]: E0120 11:45:59.934164 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:46:12 crc kubenswrapper[4725]: I0120 11:46:12.938475 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:46:12 crc kubenswrapper[4725]: E0120 11:46:12.939737 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:46:27 crc kubenswrapper[4725]: I0120 11:46:27.932996 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:46:27 crc kubenswrapper[4725]: E0120 11:46:27.934098 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:46:38 crc kubenswrapper[4725]: I0120 11:46:38.933990 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:46:38 crc 
kubenswrapper[4725]: E0120 11:46:38.935132 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:46:52 crc kubenswrapper[4725]: I0120 11:46:52.941847 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:46:52 crc kubenswrapper[4725]: E0120 11:46:52.945465 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:04 crc kubenswrapper[4725]: I0120 11:47:04.940600 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:04 crc kubenswrapper[4725]: E0120 11:47:04.943307 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:17 crc kubenswrapper[4725]: I0120 11:47:17.932270 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:17 crc kubenswrapper[4725]: E0120 11:47:17.933194 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.130329 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:21 crc kubenswrapper[4725]: E0120 11:47:21.131291 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5b706901-8a1e-4f91-988f-0f295b512b2b" containerName="collect-profiles" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.131310 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5b706901-8a1e-4f91-988f-0f295b512b2b" containerName="collect-profiles" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.131494 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5b706901-8a1e-4f91-988f-0f295b512b2b" containerName="collect-profiles" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.133026 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.161128 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.176879 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") pod \"infrawatch-operators-h4d72\" (UID: \"134d5e80-3994-4b7d-9680-4bac160108e3\") " pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.278437 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") pod \"infrawatch-operators-h4d72\" (UID: \"134d5e80-3994-4b7d-9680-4bac160108e3\") " pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.309575 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") pod \"infrawatch-operators-h4d72\" (UID: \"134d5e80-3994-4b7d-9680-4bac160108e3\") " pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.463780 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.744936 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:21 crc kubenswrapper[4725]: I0120 11:47:21.762200 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:47:22 crc kubenswrapper[4725]: I0120 11:47:22.743725 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h4d72" event={"ID":"134d5e80-3994-4b7d-9680-4bac160108e3","Type":"ContainerStarted","Data":"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5"} Jan 20 11:47:22 crc kubenswrapper[4725]: I0120 11:47:22.744239 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h4d72" event={"ID":"134d5e80-3994-4b7d-9680-4bac160108e3","Type":"ContainerStarted","Data":"1ec9f71e1cb0c4d069c12c7836b2eea740de9592c2750e3aac3ee699298c3f0c"} Jan 20 11:47:22 crc kubenswrapper[4725]: I0120 11:47:22.765128 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-h4d72" podStartSLOduration=1.6058907059999998 podStartE2EDuration="1.765066742s" podCreationTimestamp="2026-01-20 11:47:21 +0000 UTC" firstStartedPulling="2026-01-20 11:47:21.761765136 +0000 UTC m=+2569.970087109" lastFinishedPulling="2026-01-20 11:47:21.920941172 +0000 UTC m=+2570.129263145" observedRunningTime="2026-01-20 11:47:22.76247466 +0000 UTC m=+2570.970796633" watchObservedRunningTime="2026-01-20 11:47:22.765066742 +0000 UTC m=+2570.973388715" Jan 20 11:47:28 crc kubenswrapper[4725]: I0120 11:47:28.933352 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:28 crc kubenswrapper[4725]: E0120 11:47:28.934481 4725 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.464653 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.464769 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.507244 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.866429 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:31 crc kubenswrapper[4725]: I0120 11:47:31.920137 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:33 crc kubenswrapper[4725]: I0120 11:47:33.836419 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-h4d72" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" containerName="registry-server" containerID="cri-o://35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" gracePeriod=2 Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.345969 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.525608 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") pod \"134d5e80-3994-4b7d-9680-4bac160108e3\" (UID: \"134d5e80-3994-4b7d-9680-4bac160108e3\") " Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.533034 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs" (OuterVolumeSpecName: "kube-api-access-cjfhs") pod "134d5e80-3994-4b7d-9680-4bac160108e3" (UID: "134d5e80-3994-4b7d-9680-4bac160108e3"). InnerVolumeSpecName "kube-api-access-cjfhs". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.628694 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-cjfhs\" (UniqueName: \"kubernetes.io/projected/134d5e80-3994-4b7d-9680-4bac160108e3-kube-api-access-cjfhs\") on node \"crc\" DevicePath \"\"" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.846399 4725 generic.go:334] "Generic (PLEG): container finished" podID="134d5e80-3994-4b7d-9680-4bac160108e3" containerID="35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" exitCode=0 Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.846478 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-h4d72" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.846549 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h4d72" event={"ID":"134d5e80-3994-4b7d-9680-4bac160108e3","Type":"ContainerDied","Data":"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5"} Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.848437 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-h4d72" event={"ID":"134d5e80-3994-4b7d-9680-4bac160108e3","Type":"ContainerDied","Data":"1ec9f71e1cb0c4d069c12c7836b2eea740de9592c2750e3aac3ee699298c3f0c"} Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.848472 4725 scope.go:117] "RemoveContainer" containerID="35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.873893 4725 scope.go:117] "RemoveContainer" containerID="35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" Jan 20 11:47:34 crc kubenswrapper[4725]: E0120 11:47:34.874669 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5\": container with ID starting with 35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5 not found: ID does not exist" containerID="35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.874804 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5"} err="failed to get container status \"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5\": rpc error: code = NotFound desc = could not find container \"35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5\": container with ID starting with 35eecd26e007f8682c75e048d9de8382463fa5b925064ca599e37428f67494a5 not found: ID does not exist" Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.892471 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.902566 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-h4d72"] Jan 20 11:47:34 crc kubenswrapper[4725]: I0120 11:47:34.942336 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" path="/var/lib/kubelet/pods/134d5e80-3994-4b7d-9680-4bac160108e3/volumes" Jan 20 11:47:41 crc kubenswrapper[4725]: I0120 11:47:41.932722 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:41 crc kubenswrapper[4725]: E0120 11:47:41.934170 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:47:53 crc kubenswrapper[4725]: I0120 11:47:53.932420 4725 scope.go:117] "RemoveContainer" 
containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:47:53 crc kubenswrapper[4725]: E0120 11:47:53.933588 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:07 crc kubenswrapper[4725]: I0120 11:48:07.933189 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:07 crc kubenswrapper[4725]: E0120 11:48:07.934516 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:21 crc kubenswrapper[4725]: I0120 11:48:21.932927 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:21 crc kubenswrapper[4725]: E0120 11:48:21.934554 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:32 crc kubenswrapper[4725]: I0120 11:48:32.952016 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:32 crc kubenswrapper[4725]: E0120 11:48:32.953491 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:44 crc kubenswrapper[4725]: I0120 11:48:44.932426 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:44 crc kubenswrapper[4725]: E0120 11:48:44.933599 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:48:58 crc kubenswrapper[4725]: I0120 11:48:58.933641 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:48:58 crc kubenswrapper[4725]: E0120 11:48:58.934819 4725 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:49:10 crc kubenswrapper[4725]: I0120 11:49:10.938517 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:49:10 crc kubenswrapper[4725]: E0120 11:49:10.940674 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:49:23 crc kubenswrapper[4725]: I0120 11:49:23.932533 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:49:23 crc kubenswrapper[4725]: E0120 11:49:23.933726 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:49:38 crc kubenswrapper[4725]: I0120 11:49:38.932917 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:49:39 crc kubenswrapper[4725]: I0120 11:49:39.343748 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030"} Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.680722 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:49:44 crc kubenswrapper[4725]: E0120 11:49:44.682372 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" containerName="registry-server" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.682396 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" containerName="registry-server" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.682609 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="134d5e80-3994-4b7d-9680-4bac160108e3" containerName="registry-server" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.684191 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.706171 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.794533 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.794716 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g2n9l\" (UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.794754 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.896489 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g2n9l\" (UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.896575 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.896622 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.897675 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.897718 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:44 crc kubenswrapper[4725]: I0120 11:49:44.927697 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-g2n9l\" (UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") pod \"redhat-operators-vvprb\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:45 crc kubenswrapper[4725]: I0120 11:49:45.006629 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:45 crc kubenswrapper[4725]: I0120 11:49:45.338704 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:49:45 crc kubenswrapper[4725]: I0120 11:49:45.396086 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerStarted","Data":"90a960928f25c322f34742bfaafed232a0042a646b8514ff0a1281c50bb598a7"} Jan 20 11:49:46 crc kubenswrapper[4725]: I0120 11:49:46.422072 4725 generic.go:334] "Generic (PLEG): container finished" podID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerID="5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b" exitCode=0 Jan 20 11:49:46 crc kubenswrapper[4725]: I0120 11:49:46.422402 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerDied","Data":"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b"} Jan 20 11:49:47 crc kubenswrapper[4725]: I0120 11:49:47.450403 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerStarted","Data":"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b"} Jan 20 11:49:49 crc kubenswrapper[4725]: I0120 11:49:49.469760 4725 generic.go:334] "Generic (PLEG): container finished" podID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerID="c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b" exitCode=0 Jan 20 11:49:49 crc kubenswrapper[4725]: I0120 11:49:49.470059 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerDied","Data":"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b"} Jan 20 11:49:50 crc kubenswrapper[4725]: I0120 11:49:50.484428 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerStarted","Data":"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4"} Jan 20 11:49:50 crc kubenswrapper[4725]: I0120 11:49:50.508581 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-vvprb" podStartSLOduration=2.7538572500000003 podStartE2EDuration="6.508535065s" podCreationTimestamp="2026-01-20 11:49:44 +0000 UTC" firstStartedPulling="2026-01-20 11:49:46.427025922 +0000 UTC m=+2714.635347885" lastFinishedPulling="2026-01-20 11:49:50.181703727 +0000 UTC m=+2718.390025700" observedRunningTime="2026-01-20 11:49:50.504840209 +0000 UTC m=+2718.713162182" watchObservedRunningTime="2026-01-20 11:49:50.508535065 +0000 UTC m=+2718.716857028" Jan 20 11:49:55 crc kubenswrapper[4725]: I0120 11:49:55.007463 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" 
pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:55 crc kubenswrapper[4725]: I0120 11:49:55.008356 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:49:56 crc kubenswrapper[4725]: I0120 11:49:56.061411 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-vvprb" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" probeResult="failure" output=< Jan 20 11:49:56 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 11:49:56 crc kubenswrapper[4725]: > Jan 20 11:50:05 crc kubenswrapper[4725]: I0120 11:50:05.053940 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:50:05 crc kubenswrapper[4725]: I0120 11:50:05.101391 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:50:05 crc kubenswrapper[4725]: I0120 11:50:05.303700 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:50:06 crc kubenswrapper[4725]: I0120 11:50:06.629556 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-vvprb" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" containerID="cri-o://25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" gracePeriod=2 Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.136728 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.259895 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g2n9l\" (UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") pod \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.260188 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") pod \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.261064 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") pod \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\" (UID: \"dce5b8ba-279b-46b4-a0df-e8b73a0cb582\") " Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.262344 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities" (OuterVolumeSpecName: "utilities") pod "dce5b8ba-279b-46b4-a0df-e8b73a0cb582" (UID: "dce5b8ba-279b-46b4-a0df-e8b73a0cb582"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.267748 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l" (OuterVolumeSpecName: "kube-api-access-g2n9l") pod "dce5b8ba-279b-46b4-a0df-e8b73a0cb582" (UID: "dce5b8ba-279b-46b4-a0df-e8b73a0cb582"). InnerVolumeSpecName "kube-api-access-g2n9l". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.363416 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g2n9l\" (UniqueName: \"kubernetes.io/projected/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-kube-api-access-g2n9l\") on node \"crc\" DevicePath \"\"" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.363463 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.385644 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dce5b8ba-279b-46b4-a0df-e8b73a0cb582" (UID: "dce5b8ba-279b-46b4-a0df-e8b73a0cb582"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.465272 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dce5b8ba-279b-46b4-a0df-e8b73a0cb582-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648477 4725 generic.go:334] "Generic (PLEG): container finished" podID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerID="25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" exitCode=0 Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648546 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerDied","Data":"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4"} Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648566 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-vvprb" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648593 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-vvprb" event={"ID":"dce5b8ba-279b-46b4-a0df-e8b73a0cb582","Type":"ContainerDied","Data":"90a960928f25c322f34742bfaafed232a0042a646b8514ff0a1281c50bb598a7"} Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.648619 4725 scope.go:117] "RemoveContainer" containerID="25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.690853 4725 scope.go:117] "RemoveContainer" containerID="c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.696911 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.706925 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-vvprb"] Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.726104 4725 scope.go:117] "RemoveContainer" containerID="5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.747602 4725 scope.go:117] "RemoveContainer" containerID="25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" Jan 20 11:50:08 crc kubenswrapper[4725]: E0120 11:50:08.748547 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4\": container with ID starting with 25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4 not found: ID does not exist" containerID="25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.748722 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4"} err="failed to get container status \"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4\": rpc error: code = NotFound desc = could not find container \"25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4\": container with ID starting with 25188fa8478892d146d4e27d961c4eeecabf38c07b7c4c896095fa460c7bfbe4 not found: ID does not exist" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.748781 4725 scope.go:117] "RemoveContainer" containerID="c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b" Jan 20 11:50:08 crc kubenswrapper[4725]: E0120 11:50:08.749629 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b\": container with ID starting with c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b not found: ID does not exist" containerID="c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.749743 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b"} err="failed to get container status \"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b\": rpc error: code = NotFound desc = could not find container 
\"c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b\": container with ID starting with c47c62c3bff380af5f15775be2b4415806b8bf40d13aaa5c400101577c5a642b not found: ID does not exist" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.749794 4725 scope.go:117] "RemoveContainer" containerID="5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b" Jan 20 11:50:08 crc kubenswrapper[4725]: E0120 11:50:08.750241 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b\": container with ID starting with 5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b not found: ID does not exist" containerID="5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.750273 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b"} err="failed to get container status \"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b\": rpc error: code = NotFound desc = could not find container \"5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b\": container with ID starting with 5c40fcda326e30d5bffb4b535082d0fc65a2ae282b80583b62a2acf027249b9b not found: ID does not exist" Jan 20 11:50:08 crc kubenswrapper[4725]: I0120 11:50:08.942498 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" path="/var/lib/kubelet/pods/dce5b8ba-279b-46b4-a0df-e8b73a0cb582/volumes" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.685006 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:32 crc kubenswrapper[4725]: E0120 11:51:32.686460 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="extract-content" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.686485 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="extract-content" Jan 20 11:51:32 crc kubenswrapper[4725]: E0120 11:51:32.686513 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="extract-utilities" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.686523 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="extract-utilities" Jan 20 11:51:32 crc kubenswrapper[4725]: E0120 11:51:32.686547 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.686558 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.686752 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dce5b8ba-279b-46b4-a0df-e8b73a0cb582" containerName="registry-server" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.688222 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.698335 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.698886 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.698920 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.710190 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.800736 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.801112 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.801146 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.801752 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.801890 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:32 crc kubenswrapper[4725]: I0120 11:51:32.823893 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") pod \"certified-operators-zt2mx\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.021285 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.360938 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.636051 4725 generic.go:334] "Generic (PLEG): container finished" podID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerID="439c27da629e8d548ff2341cd19df7c6cd9c5bb048de7df33d00c7d90b2ae60c" exitCode=0 Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.636139 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerDied","Data":"439c27da629e8d548ff2341cd19df7c6cd9c5bb048de7df33d00c7d90b2ae60c"} Jan 20 11:51:33 crc kubenswrapper[4725]: I0120 11:51:33.636173 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerStarted","Data":"b8eb4165aba353118b0eaefaaf0a753011eb612f99ca1e758ca08b0d1b5df660"} Jan 20 11:51:34 crc kubenswrapper[4725]: I0120 11:51:34.647911 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerStarted","Data":"eba18d43c98871c9f3643d4810df2fe92638eb33812632d2591731c4c56f9b75"} Jan 20 11:51:35 crc kubenswrapper[4725]: I0120 11:51:35.659288 4725 generic.go:334] "Generic (PLEG): container finished" podID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerID="eba18d43c98871c9f3643d4810df2fe92638eb33812632d2591731c4c56f9b75" exitCode=0 Jan 20 11:51:35 crc kubenswrapper[4725]: I0120 11:51:35.659352 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerDied","Data":"eba18d43c98871c9f3643d4810df2fe92638eb33812632d2591731c4c56f9b75"} Jan 20 11:51:36 crc kubenswrapper[4725]: I0120 11:51:36.677110 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerStarted","Data":"989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036"} Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.022701 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.023749 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.091817 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.112815 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="openshift-marketplace/certified-operators-zt2mx" podStartSLOduration=8.648207774 podStartE2EDuration="11.112786507s" podCreationTimestamp="2026-01-20 11:51:32 +0000 UTC" firstStartedPulling="2026-01-20 11:51:33.639198211 +0000 UTC m=+2821.847520184" lastFinishedPulling="2026-01-20 11:51:36.103776944 +0000 UTC m=+2824.312098917" observedRunningTime="2026-01-20 11:51:36.709132399 +0000 UTC m=+2824.917454382" watchObservedRunningTime="2026-01-20 11:51:43.112786507 +0000 UTC m=+2831.321108480" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.800124 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:43 crc kubenswrapper[4725]: I0120 11:51:43.856229 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:45 crc kubenswrapper[4725]: I0120 11:51:45.767696 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-zt2mx" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="registry-server" containerID="cri-o://989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036" gracePeriod=2 Jan 20 11:51:46 crc kubenswrapper[4725]: I0120 11:51:46.778263 4725 generic.go:334] "Generic (PLEG): container finished" podID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerID="989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036" exitCode=0 Jan 20 11:51:46 crc kubenswrapper[4725]: I0120 11:51:46.778339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerDied","Data":"989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036"} Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.322940 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.482838 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") pod \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.482942 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") pod \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.483173 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") pod \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\" (UID: \"61a768f0-365b-431a-88fc-22a3f6c9ec4b\") " Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.484309 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities" (OuterVolumeSpecName: "utilities") pod "61a768f0-365b-431a-88fc-22a3f6c9ec4b" (UID: "61a768f0-365b-431a-88fc-22a3f6c9ec4b"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.503974 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf" (OuterVolumeSpecName: "kube-api-access-hsfzf") pod "61a768f0-365b-431a-88fc-22a3f6c9ec4b" (UID: "61a768f0-365b-431a-88fc-22a3f6c9ec4b"). InnerVolumeSpecName "kube-api-access-hsfzf". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.562560 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "61a768f0-365b-431a-88fc-22a3f6c9ec4b" (UID: "61a768f0-365b-431a-88fc-22a3f6c9ec4b"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.587506 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.587558 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hsfzf\" (UniqueName: \"kubernetes.io/projected/61a768f0-365b-431a-88fc-22a3f6c9ec4b-kube-api-access-hsfzf\") on node \"crc\" DevicePath \"\"" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.587569 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/61a768f0-365b-431a-88fc-22a3f6c9ec4b-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.791657 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-zt2mx" event={"ID":"61a768f0-365b-431a-88fc-22a3f6c9ec4b","Type":"ContainerDied","Data":"b8eb4165aba353118b0eaefaaf0a753011eb612f99ca1e758ca08b0d1b5df660"} Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.791759 4725 scope.go:117] "RemoveContainer" containerID="989165e4d588af83988f7e26bee921d2095aac7f4d5275cecb8e7b6ca3c70036" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.791817 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-zt2mx" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.818574 4725 scope.go:117] "RemoveContainer" containerID="eba18d43c98871c9f3643d4810df2fe92638eb33812632d2591731c4c56f9b75" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.841341 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.848117 4725 scope.go:117] "RemoveContainer" containerID="439c27da629e8d548ff2341cd19df7c6cd9c5bb048de7df33d00c7d90b2ae60c" Jan 20 11:51:47 crc kubenswrapper[4725]: I0120 11:51:47.848950 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-zt2mx"] Jan 20 11:51:48 crc kubenswrapper[4725]: I0120 11:51:48.953986 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" path="/var/lib/kubelet/pods/61a768f0-365b-431a-88fc-22a3f6c9ec4b/volumes" Jan 20 11:51:56 crc kubenswrapper[4725]: I0120 11:51:56.728007 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:51:56 crc kubenswrapper[4725]: I0120 11:51:56.728988 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:52:26 crc kubenswrapper[4725]: I0120 11:52:26.727675 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:52:26 crc kubenswrapper[4725]: I0120 11:52:26.728779 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.728485 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.729558 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.729649 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.730820 4725 
kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:52:56 crc kubenswrapper[4725]: I0120 11:52:56.730975 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030" gracePeriod=600 Jan 20 11:52:57 crc kubenswrapper[4725]: I0120 11:52:57.505629 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030" exitCode=0 Jan 20 11:52:57 crc kubenswrapper[4725]: I0120 11:52:57.505710 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030"} Jan 20 11:52:57 crc kubenswrapper[4725]: I0120 11:52:57.506607 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"} Jan 20 11:52:57 crc kubenswrapper[4725]: I0120 11:52:57.506642 4725 scope.go:117] "RemoveContainer" containerID="2e54d531de92eca8479cd3c6d5dbae0e9f37e357b12af4ef6a5623615ca0bb16" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.410618 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:00 crc kubenswrapper[4725]: E0120 11:53:00.411671 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="registry-server" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.411700 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="registry-server" Jan 20 11:53:00 crc kubenswrapper[4725]: E0120 11:53:00.411727 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="extract-utilities" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.411736 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="extract-utilities" Jan 20 11:53:00 crc kubenswrapper[4725]: E0120 11:53:00.411748 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="extract-content" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.411757 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="extract-content" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.411921 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="61a768f0-365b-431a-88fc-22a3f6c9ec4b" containerName="registry-server" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.413453 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.423267 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.571292 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.571375 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.571453 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.672657 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.672742 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.672774 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.673519 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.673518 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.702260 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") pod \"community-operators-pkr8m\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:00 crc kubenswrapper[4725]: I0120 11:53:00.733910 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.277683 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.549814 4725 generic.go:334] "Generic (PLEG): container finished" podID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerID="da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed" exitCode=0 Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.549926 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerDied","Data":"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed"} Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.550009 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerStarted","Data":"389cf9eb3a31670eadbf0da4f7f3b31dee04694d0d2ded89763aaf5965f02fd2"} Jan 20 11:53:01 crc kubenswrapper[4725]: I0120 11:53:01.552237 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:53:03 crc kubenswrapper[4725]: I0120 11:53:03.581813 4725 generic.go:334] "Generic (PLEG): container finished" podID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerID="02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743" exitCode=0 Jan 20 11:53:03 crc kubenswrapper[4725]: I0120 11:53:03.582226 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerDied","Data":"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743"} Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.001503 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.003036 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.014623 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.127761 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") pod \"infrawatch-operators-9qrfz\" (UID: \"b2c46661-7c6f-442f-af6c-6c0d71674631\") " pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.229773 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") pod \"infrawatch-operators-9qrfz\" (UID: \"b2c46661-7c6f-442f-af6c-6c0d71674631\") " pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.253529 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") pod \"infrawatch-operators-9qrfz\" (UID: \"b2c46661-7c6f-442f-af6c-6c0d71674631\") " pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.323368 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.558638 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:05 crc kubenswrapper[4725]: W0120 11:53:05.569387 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2c46661_7c6f_442f_af6c_6c0d71674631.slice/crio-5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93 WatchSource:0}: Error finding container 5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93: Status 404 returned error can't find the container with id 5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93 Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.630979 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9qrfz" event={"ID":"b2c46661-7c6f-442f-af6c-6c0d71674631","Type":"ContainerStarted","Data":"5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93"} Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.634711 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerStarted","Data":"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8"} Jan 20 11:53:05 crc kubenswrapper[4725]: I0120 11:53:05.657066 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-pkr8m" podStartSLOduration=2.780000518 podStartE2EDuration="5.657038507s" podCreationTimestamp="2026-01-20 11:53:00 +0000 UTC" firstStartedPulling="2026-01-20 11:53:01.551823547 +0000 UTC m=+2909.760145530" lastFinishedPulling="2026-01-20 11:53:04.428861536 +0000 UTC m=+2912.637183519" observedRunningTime="2026-01-20 11:53:05.65587089 +0000 UTC 
m=+2913.864192873" watchObservedRunningTime="2026-01-20 11:53:05.657038507 +0000 UTC m=+2913.865360470" Jan 20 11:53:06 crc kubenswrapper[4725]: I0120 11:53:06.647700 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9qrfz" event={"ID":"b2c46661-7c6f-442f-af6c-6c0d71674631","Type":"ContainerStarted","Data":"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364"} Jan 20 11:53:06 crc kubenswrapper[4725]: I0120 11:53:06.676466 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-9qrfz" podStartSLOduration=2.531157291 podStartE2EDuration="2.676436419s" podCreationTimestamp="2026-01-20 11:53:04 +0000 UTC" firstStartedPulling="2026-01-20 11:53:05.571880764 +0000 UTC m=+2913.780202737" lastFinishedPulling="2026-01-20 11:53:05.717159892 +0000 UTC m=+2913.925481865" observedRunningTime="2026-01-20 11:53:06.673161466 +0000 UTC m=+2914.881483439" watchObservedRunningTime="2026-01-20 11:53:06.676436419 +0000 UTC m=+2914.884758392" Jan 20 11:53:10 crc kubenswrapper[4725]: I0120 11:53:10.735008 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:10 crc kubenswrapper[4725]: I0120 11:53:10.736714 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:10 crc kubenswrapper[4725]: I0120 11:53:10.784897 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:11 crc kubenswrapper[4725]: I0120 11:53:11.750874 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:14 crc kubenswrapper[4725]: I0120 11:53:14.381483 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:14 crc kubenswrapper[4725]: I0120 11:53:14.733046 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-pkr8m" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="registry-server" containerID="cri-o://7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" gracePeriod=2 Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.323800 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.323885 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.360122 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.629806 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.735137 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") pod \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.735266 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") pod \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.735317 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") pod \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\" (UID: \"07e24694-fcc0-41b2-9576-fd0c86d1dca3\") " Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.738589 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities" (OuterVolumeSpecName: "utilities") pod "07e24694-fcc0-41b2-9576-fd0c86d1dca3" (UID: "07e24694-fcc0-41b2-9576-fd0c86d1dca3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.743451 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm" (OuterVolumeSpecName: "kube-api-access-zlvrm") pod "07e24694-fcc0-41b2-9576-fd0c86d1dca3" (UID: "07e24694-fcc0-41b2-9576-fd0c86d1dca3"). InnerVolumeSpecName "kube-api-access-zlvrm". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752041 4725 generic.go:334] "Generic (PLEG): container finished" podID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerID="7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" exitCode=0 Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752094 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-pkr8m" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752205 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerDied","Data":"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8"} Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752244 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-pkr8m" event={"ID":"07e24694-fcc0-41b2-9576-fd0c86d1dca3","Type":"ContainerDied","Data":"389cf9eb3a31670eadbf0da4f7f3b31dee04694d0d2ded89763aaf5965f02fd2"} Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.752267 4725 scope.go:117] "RemoveContainer" containerID="7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.787932 4725 scope.go:117] "RemoveContainer" containerID="02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.790558 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.794927 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "07e24694-fcc0-41b2-9576-fd0c86d1dca3" (UID: "07e24694-fcc0-41b2-9576-fd0c86d1dca3"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.818005 4725 scope.go:117] "RemoveContainer" containerID="da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.836301 4725 scope.go:117] "RemoveContainer" containerID="7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" Jan 20 11:53:15 crc kubenswrapper[4725]: E0120 11:53:15.836966 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8\": container with ID starting with 7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8 not found: ID does not exist" containerID="7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837010 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8"} err="failed to get container status \"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8\": rpc error: code = NotFound desc = could not find container \"7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8\": container with ID starting with 7d345b1ac23a2a32278cf549cb1868e322695a2fd5111b8eda889ec9743fb4d8 not found: ID does not exist" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837038 4725 scope.go:117] "RemoveContainer" containerID="02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743" Jan 20 11:53:15 crc kubenswrapper[4725]: E0120 11:53:15.837388 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743\": container with ID starting with 02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743 not found: ID does not exist" containerID="02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837405 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837449 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-zlvrm\" (UniqueName: \"kubernetes.io/projected/07e24694-fcc0-41b2-9576-fd0c86d1dca3-kube-api-access-zlvrm\") on node \"crc\" DevicePath \"\"" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837464 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/07e24694-fcc0-41b2-9576-fd0c86d1dca3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837417 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743"} err="failed to get container status \"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743\": rpc error: code = NotFound desc = could not find container \"02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743\": container with ID starting with 02f46186e291387bcfd42cf8a34e9a7797e758d9b5f066e27d263e79e3f5f743 not found: ID does not exist" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.837497 4725 scope.go:117] "RemoveContainer" containerID="da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed" Jan 20 11:53:15 crc kubenswrapper[4725]: E0120 11:53:15.839196 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed\": container with ID starting with da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed not found: ID does not exist" containerID="da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed" Jan 20 11:53:15 crc kubenswrapper[4725]: I0120 11:53:15.839250 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed"} err="failed to get container status \"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed\": rpc error: code = NotFound desc = could not find container \"da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed\": container with ID starting with da22ef86a1ac0a76f9b7b69d55ab71f455aa7e22dad0ee0a44eaac9af5c1a2ed not found: ID does not exist" Jan 20 11:53:16 crc kubenswrapper[4725]: I0120 11:53:16.140584 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:16 crc kubenswrapper[4725]: I0120 11:53:16.145837 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-pkr8m"] Jan 20 11:53:16 crc kubenswrapper[4725]: I0120 11:53:16.941221 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" path="/var/lib/kubelet/pods/07e24694-fcc0-41b2-9576-fd0c86d1dca3/volumes" Jan 20 11:53:18 crc kubenswrapper[4725]: 
I0120 11:53:18.983447 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:18 crc kubenswrapper[4725]: I0120 11:53:18.983804 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-9qrfz" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerName="registry-server" containerID="cri-o://d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" gracePeriod=2 Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.378907 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.405381 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") pod \"b2c46661-7c6f-442f-af6c-6c0d71674631\" (UID: \"b2c46661-7c6f-442f-af6c-6c0d71674631\") " Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.414591 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv" (OuterVolumeSpecName: "kube-api-access-lrjcv") pod "b2c46661-7c6f-442f-af6c-6c0d71674631" (UID: "b2c46661-7c6f-442f-af6c-6c0d71674631"). InnerVolumeSpecName "kube-api-access-lrjcv". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.508470 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-lrjcv\" (UniqueName: \"kubernetes.io/projected/b2c46661-7c6f-442f-af6c-6c0d71674631-kube-api-access-lrjcv\") on node \"crc\" DevicePath \"\"" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793421 4725 generic.go:334] "Generic (PLEG): container finished" podID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerID="d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" exitCode=0 Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793514 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-9qrfz" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793545 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9qrfz" event={"ID":"b2c46661-7c6f-442f-af6c-6c0d71674631","Type":"ContainerDied","Data":"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364"} Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793626 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-9qrfz" event={"ID":"b2c46661-7c6f-442f-af6c-6c0d71674631","Type":"ContainerDied","Data":"5ef32346829112d72e899852a1d0bb268526381a6a96a8182d689bd76e615a93"} Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.793656 4725 scope.go:117] "RemoveContainer" containerID="d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.819043 4725 scope.go:117] "RemoveContainer" containerID="d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" Jan 20 11:53:19 crc kubenswrapper[4725]: E0120 11:53:19.819822 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364\": container with ID starting with d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364 not found: ID does not exist" containerID="d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.819904 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364"} err="failed to get container status \"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364\": rpc error: code = NotFound desc = could not find container \"d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364\": container with ID starting with d0d05e6f5f1461e5cc4350ffbe1cee7ed58116cb168d47c7bcac55d852088364 not found: ID does not exist" Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.835404 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:19 crc kubenswrapper[4725]: I0120 11:53:19.845178 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-9qrfz"] Jan 20 11:53:20 crc kubenswrapper[4725]: I0120 11:53:20.941175 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" path="/var/lib/kubelet/pods/b2c46661-7c6f-442f-af6c-6c0d71674631/volumes" Jan 20 11:55:26 crc kubenswrapper[4725]: I0120 11:55:26.728549 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:55:26 crc kubenswrapper[4725]: I0120 11:55:26.729761 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:55:56 crc kubenswrapper[4725]: I0120 11:55:56.727943 4725 patch_prober.go:28] 
interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:55:56 crc kubenswrapper[4725]: I0120 11:55:56.728688 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.727541 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.728032 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.728150 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.728998 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.729105 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" gracePeriod=600 Jan 20 11:56:26 crc kubenswrapper[4725]: E0120 11:56:26.865040 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.876624 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" exitCode=0 Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.876687 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"} Jan 20 11:56:26 crc 
kubenswrapper[4725]: I0120 11:56:26.876745 4725 scope.go:117] "RemoveContainer" containerID="b487c1da831952b6be0c22d02ea25935db4d2c51c76529ece3ca853c064d9030" Jan 20 11:56:26 crc kubenswrapper[4725]: I0120 11:56:26.877885 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:56:26 crc kubenswrapper[4725]: E0120 11:56:26.878819 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:56:37 crc kubenswrapper[4725]: I0120 11:56:37.933551 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:56:37 crc kubenswrapper[4725]: E0120 11:56:37.934508 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:56:49 crc kubenswrapper[4725]: I0120 11:56:49.932659 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:56:49 crc kubenswrapper[4725]: E0120 11:56:49.933815 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:57:03 crc kubenswrapper[4725]: I0120 11:57:03.932920 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:57:03 crc kubenswrapper[4725]: E0120 11:57:03.934017 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:57:18 crc kubenswrapper[4725]: I0120 11:57:18.937517 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:57:18 crc kubenswrapper[4725]: E0120 11:57:18.940660 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:57:31 crc 
kubenswrapper[4725]: I0120 11:57:31.933292 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:57:31 crc kubenswrapper[4725]: E0120 11:57:31.934357 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:57:46 crc kubenswrapper[4725]: I0120 11:57:46.933295 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:57:46 crc kubenswrapper[4725]: E0120 11:57:46.934405 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:00 crc kubenswrapper[4725]: I0120 11:58:00.932865 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:00 crc kubenswrapper[4725]: E0120 11:58:00.934037 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:14 crc kubenswrapper[4725]: I0120 11:58:14.931908 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:14 crc kubenswrapper[4725]: E0120 11:58:14.933134 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:28 crc kubenswrapper[4725]: I0120 11:58:28.933203 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:28 crc kubenswrapper[4725]: E0120 11:58:28.934362 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.105350 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:33 crc kubenswrapper[4725]: E0120 
11:58:33.106427 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106447 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: E0120 11:58:33.106476 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="extract-content" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106485 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="extract-content" Jan 20 11:58:33 crc kubenswrapper[4725]: E0120 11:58:33.106497 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="extract-utilities" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106507 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="extract-utilities" Jan 20 11:58:33 crc kubenswrapper[4725]: E0120 11:58:33.106531 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106538 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106769 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="07e24694-fcc0-41b2-9576-fd0c86d1dca3" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.106792 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="b2c46661-7c6f-442f-af6c-6c0d71674631" containerName="registry-server" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.107558 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.113222 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.286123 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") pod \"infrawatch-operators-szgzx\" (UID: \"ff83a417-3909-4bf5-9300-40129abe7ad3\") " pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.388194 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") pod \"infrawatch-operators-szgzx\" (UID: \"ff83a417-3909-4bf5-9300-40129abe7ad3\") " pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.413459 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") pod \"infrawatch-operators-szgzx\" (UID: \"ff83a417-3909-4bf5-9300-40129abe7ad3\") " pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.438131 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.715350 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:33 crc kubenswrapper[4725]: I0120 11:58:33.726737 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 11:58:34 crc kubenswrapper[4725]: I0120 11:58:34.069067 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szgzx" event={"ID":"ff83a417-3909-4bf5-9300-40129abe7ad3","Type":"ContainerStarted","Data":"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"} Jan 20 11:58:34 crc kubenswrapper[4725]: I0120 11:58:34.069190 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szgzx" event={"ID":"ff83a417-3909-4bf5-9300-40129abe7ad3","Type":"ContainerStarted","Data":"10f8f1370bf7c297a0eefd23ad5a3a876be0d75b5983ab22ccd39fceb71cea67"} Jan 20 11:58:34 crc kubenswrapper[4725]: I0120 11:58:34.106124 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-szgzx" podStartSLOduration=0.985034207 podStartE2EDuration="1.106090985s" podCreationTimestamp="2026-01-20 11:58:33 +0000 UTC" firstStartedPulling="2026-01-20 11:58:33.726318327 +0000 UTC m=+3241.934640300" lastFinishedPulling="2026-01-20 11:58:33.847375105 +0000 UTC m=+3242.055697078" observedRunningTime="2026-01-20 11:58:34.090580286 +0000 UTC m=+3242.298902259" watchObservedRunningTime="2026-01-20 11:58:34.106090985 +0000 UTC m=+3242.314412958" Jan 20 11:58:43 crc kubenswrapper[4725]: I0120 11:58:43.438665 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:43 crc kubenswrapper[4725]: I0120 11:58:43.440245 4725 kubelet.go:2542] "SyncLoop 
(probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:43 crc kubenswrapper[4725]: I0120 11:58:43.473161 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:43 crc kubenswrapper[4725]: I0120 11:58:43.932574 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:43 crc kubenswrapper[4725]: E0120 11:58:43.932895 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:58:44 crc kubenswrapper[4725]: I0120 11:58:44.245715 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:45 crc kubenswrapper[4725]: I0120 11:58:45.874996 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.237656 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-szgzx" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerName="registry-server" containerID="cri-o://0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0" gracePeriod=2 Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.616972 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.776474 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") pod \"ff83a417-3909-4bf5-9300-40129abe7ad3\" (UID: \"ff83a417-3909-4bf5-9300-40129abe7ad3\") " Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.785226 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl" (OuterVolumeSpecName: "kube-api-access-pjvwl") pod "ff83a417-3909-4bf5-9300-40129abe7ad3" (UID: "ff83a417-3909-4bf5-9300-40129abe7ad3"). InnerVolumeSpecName "kube-api-access-pjvwl". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 11:58:47 crc kubenswrapper[4725]: I0120 11:58:47.879442 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-pjvwl\" (UniqueName: \"kubernetes.io/projected/ff83a417-3909-4bf5-9300-40129abe7ad3-kube-api-access-pjvwl\") on node \"crc\" DevicePath \"\"" Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.249207 4725 generic.go:334] "Generic (PLEG): container finished" podID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerID="0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0" exitCode=0 Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.249453 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szgzx" event={"ID":"ff83a417-3909-4bf5-9300-40129abe7ad3","Type":"ContainerDied","Data":"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"} Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.251046 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-szgzx" event={"ID":"ff83a417-3909-4bf5-9300-40129abe7ad3","Type":"ContainerDied","Data":"10f8f1370bf7c297a0eefd23ad5a3a876be0d75b5983ab22ccd39fceb71cea67"} Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.249554 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-szgzx" Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.251227 4725 scope.go:117] "RemoveContainer" containerID="0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0" Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.279821 4725 scope.go:117] "RemoveContainer" containerID="0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0" Jan 20 11:58:48 crc kubenswrapper[4725]: E0120 11:58:48.281500 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0\": container with ID starting with 0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0 not found: ID does not exist" containerID="0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0" Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.281548 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0"} err="failed to get container status \"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0\": rpc error: code = NotFound desc = could not find container \"0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0\": container with ID starting with 0c538df5b648e4853108875e8d3249cd28b2664ce5c30e8c46dbe5cd7eab68f0 not found: ID does not exist" Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.306558 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.314286 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-szgzx"] Jan 20 11:58:48 crc kubenswrapper[4725]: I0120 11:58:48.950281 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" path="/var/lib/kubelet/pods/ff83a417-3909-4bf5-9300-40129abe7ad3/volumes" Jan 20 11:58:56 crc kubenswrapper[4725]: I0120 11:58:56.937739 4725 scope.go:117] "RemoveContainer" 
containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:58:56 crc kubenswrapper[4725]: E0120 11:58:56.939060 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:59:09 crc kubenswrapper[4725]: I0120 11:59:09.932050 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:59:09 crc kubenswrapper[4725]: E0120 11:59:09.933254 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:59:24 crc kubenswrapper[4725]: I0120 11:59:24.932957 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:59:24 crc kubenswrapper[4725]: E0120 11:59:24.934129 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:59:37 crc kubenswrapper[4725]: I0120 11:59:37.932289 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:59:37 crc kubenswrapper[4725]: E0120 11:59:37.933358 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:59:49 crc kubenswrapper[4725]: I0120 11:59:49.932593 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 11:59:49 crc kubenswrapper[4725]: E0120 11:59:49.935125 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.361923 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"] Jan 20 11:59:57 crc kubenswrapper[4725]: E0120 11:59:57.363418 4725 cpu_manager.go:410] "RemoveStaleState: removing container" 
podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerName="registry-server" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.363441 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerName="registry-server" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.363642 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="ff83a417-3909-4bf5-9300-40129abe7ad3" containerName="registry-server" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.365105 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.376820 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"] Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.467046 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.467136 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.467158 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.569687 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.569758 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.569917 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.570451 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") pod \"redhat-operators-c5ntc\" (UID: 
\"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.570744 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.597055 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") pod \"redhat-operators-c5ntc\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") " pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.698829 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.984764 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"] Jan 20 11:59:57 crc kubenswrapper[4725]: I0120 11:59:57.998556 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerStarted","Data":"392e1ebcc5bf26919da577c07af527e8b8dccf334936990b1cd4a156ea61f191"} Jan 20 11:59:59 crc kubenswrapper[4725]: I0120 11:59:59.009726 4725 generic.go:334] "Generic (PLEG): container finished" podID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerID="1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383" exitCode=0 Jan 20 11:59:59 crc kubenswrapper[4725]: I0120 11:59:59.009850 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerDied","Data":"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"} Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.028974 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerStarted","Data":"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"} Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.149347 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"] Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.151396 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.160440 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.160440 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.175097 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"] Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.268057 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.268135 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.268587 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.370476 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.370617 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.370642 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.376476 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") pod 
\"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.389895 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.397219 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") pod \"collect-profiles-29481840-t9942\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.490434 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:00 crc kubenswrapper[4725]: I0120 12:00:00.950321 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942"] Jan 20 12:00:01 crc kubenswrapper[4725]: I0120 12:00:01.040323 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" event={"ID":"2554af70-a48f-4921-a6a6-407016260425","Type":"ContainerStarted","Data":"2c0425d87fc1b48fbc261bcffbbc7b2f08b74a79c6a7a3781b51817e41fde95d"} Jan 20 12:00:01 crc kubenswrapper[4725]: I0120 12:00:01.933226 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 12:00:01 crc kubenswrapper[4725]: E0120 12:00:01.934151 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:00:02 crc kubenswrapper[4725]: I0120 12:00:02.051472 4725 generic.go:334] "Generic (PLEG): container finished" podID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerID="e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045" exitCode=0 Jan 20 12:00:02 crc kubenswrapper[4725]: I0120 12:00:02.051584 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerDied","Data":"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"} Jan 20 12:00:02 crc kubenswrapper[4725]: I0120 12:00:02.057432 4725 generic.go:334] "Generic (PLEG): container finished" podID="2554af70-a48f-4921-a6a6-407016260425" containerID="10ce9079465756a929b5da70283bfeabe7bc38f9a8f2768b4b30865ed5b9c3cd" exitCode=0 Jan 20 12:00:02 crc kubenswrapper[4725]: I0120 12:00:02.057508 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" 
event={"ID":"2554af70-a48f-4921-a6a6-407016260425","Type":"ContainerDied","Data":"10ce9079465756a929b5da70283bfeabe7bc38f9a8f2768b4b30865ed5b9c3cd"} Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.349474 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.527817 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") pod \"2554af70-a48f-4921-a6a6-407016260425\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.527912 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") pod \"2554af70-a48f-4921-a6a6-407016260425\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.527966 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") pod \"2554af70-a48f-4921-a6a6-407016260425\" (UID: \"2554af70-a48f-4921-a6a6-407016260425\") " Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.528883 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume" (OuterVolumeSpecName: "config-volume") pod "2554af70-a48f-4921-a6a6-407016260425" (UID: "2554af70-a48f-4921-a6a6-407016260425"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.534567 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "2554af70-a48f-4921-a6a6-407016260425" (UID: "2554af70-a48f-4921-a6a6-407016260425"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.535245 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5" (OuterVolumeSpecName: "kube-api-access-8mbm5") pod "2554af70-a48f-4921-a6a6-407016260425" (UID: "2554af70-a48f-4921-a6a6-407016260425"). InnerVolumeSpecName "kube-api-access-8mbm5". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.630462 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/2554af70-a48f-4921-a6a6-407016260425-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.630548 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mbm5\" (UniqueName: \"kubernetes.io/projected/2554af70-a48f-4921-a6a6-407016260425-kube-api-access-8mbm5\") on node \"crc\" DevicePath \"\"" Jan 20 12:00:03 crc kubenswrapper[4725]: I0120 12:00:03.630561 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/2554af70-a48f-4921-a6a6-407016260425-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.079461 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerStarted","Data":"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"} Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.083362 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" event={"ID":"2554af70-a48f-4921-a6a6-407016260425","Type":"ContainerDied","Data":"2c0425d87fc1b48fbc261bcffbbc7b2f08b74a79c6a7a3781b51817e41fde95d"} Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.083398 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2c0425d87fc1b48fbc261bcffbbc7b2f08b74a79c6a7a3781b51817e41fde95d" Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.083457 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481840-t9942" Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.117502 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-c5ntc" podStartSLOduration=2.716132135 podStartE2EDuration="7.11731471s" podCreationTimestamp="2026-01-20 11:59:57 +0000 UTC" firstStartedPulling="2026-01-20 11:59:59.012231974 +0000 UTC m=+3327.220553947" lastFinishedPulling="2026-01-20 12:00:03.413414549 +0000 UTC m=+3331.621736522" observedRunningTime="2026-01-20 12:00:04.109758952 +0000 UTC m=+3332.318080935" watchObservedRunningTime="2026-01-20 12:00:04.11731471 +0000 UTC m=+3332.325636683" Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.435619 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"] Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.443335 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481795-mbt22"] Jan 20 12:00:04 crc kubenswrapper[4725]: I0120 12:00:04.944308 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7a41df2e-87f8-4dc4-a80c-36bd1bac44aa" path="/var/lib/kubelet/pods/7a41df2e-87f8-4dc4-a80c-36bd1bac44aa/volumes" Jan 20 12:00:07 crc kubenswrapper[4725]: I0120 12:00:07.699941 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 12:00:07 crc kubenswrapper[4725]: I0120 12:00:07.700549 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 12:00:08 crc kubenswrapper[4725]: I0120 12:00:08.758022 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-c5ntc" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server" probeResult="failure" output=< Jan 20 12:00:08 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 12:00:08 crc kubenswrapper[4725]: > Jan 20 12:00:14 crc kubenswrapper[4725]: I0120 12:00:14.933633 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a" Jan 20 12:00:14 crc kubenswrapper[4725]: E0120 12:00:14.934943 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:00:17 crc kubenswrapper[4725]: I0120 12:00:17.501218 4725 scope.go:117] "RemoveContainer" containerID="df134c08a91a6b779bd70a8a4d9a198b2216cb01743c5cae1cc33fd6809cfc61" Jan 20 12:00:17 crc kubenswrapper[4725]: I0120 12:00:17.752448 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 12:00:17 crc kubenswrapper[4725]: I0120 12:00:17.806885 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-c5ntc" Jan 20 12:00:18 crc kubenswrapper[4725]: I0120 12:00:18.002356 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"] 
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.211983 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-c5ntc" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server" containerID="cri-o://7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3" gracePeriod=2
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.636940 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.829796 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") pod \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") "
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.830120 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") pod \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") "
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.830192 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") pod \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\" (UID: \"70c7db0b-067f-4c18-85c3-2a7cafffd47f\") "
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.831251 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities" (OuterVolumeSpecName: "utilities") pod "70c7db0b-067f-4c18-85c3-2a7cafffd47f" (UID: "70c7db0b-067f-4c18-85c3-2a7cafffd47f"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.837781 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48" (OuterVolumeSpecName: "kube-api-access-g6w48") pod "70c7db0b-067f-4c18-85c3-2a7cafffd47f" (UID: "70c7db0b-067f-4c18-85c3-2a7cafffd47f"). InnerVolumeSpecName "kube-api-access-g6w48". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.932426 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-g6w48\" (UniqueName: \"kubernetes.io/projected/70c7db0b-067f-4c18-85c3-2a7cafffd47f-kube-api-access-g6w48\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.932479 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:19 crc kubenswrapper[4725]: I0120 12:00:19.993227 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "70c7db0b-067f-4c18-85c3-2a7cafffd47f" (UID: "70c7db0b-067f-4c18-85c3-2a7cafffd47f"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.034742 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/70c7db0b-067f-4c18-85c3-2a7cafffd47f-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.249997 4725 generic.go:334] "Generic (PLEG): container finished" podID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerID="7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3" exitCode=0
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.250066 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerDied","Data":"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"}
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.250141 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-c5ntc" event={"ID":"70c7db0b-067f-4c18-85c3-2a7cafffd47f","Type":"ContainerDied","Data":"392e1ebcc5bf26919da577c07af527e8b8dccf334936990b1cd4a156ea61f191"}
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.250169 4725 scope.go:117] "RemoveContainer" containerID="7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.250192 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-c5ntc"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.335599 4725 scope.go:117] "RemoveContainer" containerID="e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.345131 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"]
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.354164 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-c5ntc"]
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.386501 4725 scope.go:117] "RemoveContainer" containerID="1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"
Jan 20 12:00:20 crc kubenswrapper[4725]: E0120 12:00:20.388922 4725 cadvisor_stats_provider.go:516] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70c7db0b_067f_4c18_85c3_2a7cafffd47f.slice/crio-392e1ebcc5bf26919da577c07af527e8b8dccf334936990b1cd4a156ea61f191\": RecentStats: unable to find data in memory cache]"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.405646 4725 scope.go:117] "RemoveContainer" containerID="7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"
Jan 20 12:00:20 crc kubenswrapper[4725]: E0120 12:00:20.406161 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3\": container with ID starting with 7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3 not found: ID does not exist" containerID="7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.406209 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3"} err="failed to get container status \"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3\": rpc error: code = NotFound desc = could not find container \"7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3\": container with ID starting with 7a869b1f4daa2db671a113728334225d001eb39cab5d81a159db6238f82388b3 not found: ID does not exist"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.406238 4725 scope.go:117] "RemoveContainer" containerID="e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"
Jan 20 12:00:20 crc kubenswrapper[4725]: E0120 12:00:20.406944 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045\": container with ID starting with e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045 not found: ID does not exist" containerID="e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.406979 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045"} err="failed to get container status \"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045\": rpc error: code = NotFound desc = could not find container \"e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045\": container with ID starting with e79301da5d8df72fcd2b05fe703ec3f3b94657282860ca79e642a4562fbac045 not found: ID does not exist"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.406995 4725 scope.go:117] "RemoveContainer" containerID="1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"
Jan 20 12:00:20 crc kubenswrapper[4725]: E0120 12:00:20.407299 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383\": container with ID starting with 1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383 not found: ID does not exist" containerID="1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.407326 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383"} err="failed to get container status \"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383\": rpc error: code = NotFound desc = could not find container \"1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383\": container with ID starting with 1079df0375efb4eb9bcd9c38a063b6dc9db6e91c96df100fb0ccd8acade1f383 not found: ID does not exist"
Jan 20 12:00:20 crc kubenswrapper[4725]: I0120 12:00:20.942133 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" path="/var/lib/kubelet/pods/70c7db0b-067f-4c18-85c3-2a7cafffd47f/volumes"
Jan 20 12:00:28 crc kubenswrapper[4725]: I0120 12:00:28.932686 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:00:28 crc kubenswrapper[4725]: E0120 12:00:28.933793 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:00:39 crc kubenswrapper[4725]: I0120 12:00:39.934013 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:00:39 crc kubenswrapper[4725]: E0120 12:00:39.936941 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:00:51 crc kubenswrapper[4725]: I0120 12:00:51.933245 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:00:51 crc kubenswrapper[4725]: E0120 12:00:51.934335 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:01:03 crc kubenswrapper[4725]: I0120 12:01:03.932898 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:01:03 crc kubenswrapper[4725]: E0120 12:01:03.934614 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:01:16 crc kubenswrapper[4725]: I0120 12:01:16.932414 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:01:16 crc kubenswrapper[4725]: E0120 12:01:16.933369 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:01:27 crc kubenswrapper[4725]: I0120 12:01:27.932683 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:01:29 crc kubenswrapper[4725]: I0120 12:01:29.069638 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831"}
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.463758 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"]
Jan 20 12:01:37 crc kubenswrapper[4725]: E0120 12:01:37.466560 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="extract-content"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.466735 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="extract-content"
Jan 20 12:01:37 crc kubenswrapper[4725]: E0120 12:01:37.466929 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="extract-utilities"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467027 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="extract-utilities"
Jan 20 12:01:37 crc kubenswrapper[4725]: E0120 12:01:37.467150 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="2554af70-a48f-4921-a6a6-407016260425" containerName="collect-profiles"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467277 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="2554af70-a48f-4921-a6a6-407016260425" containerName="collect-profiles"
Jan 20 12:01:37 crc kubenswrapper[4725]: E0120 12:01:37.467380 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467461 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467824 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="70c7db0b-067f-4c18-85c3-2a7cafffd47f" containerName="registry-server"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.467972 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="2554af70-a48f-4921-a6a6-407016260425" containerName="collect-profiles"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.469515 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.476222 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"]
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.666246 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.666422 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.666488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.768618 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.768746 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.768821 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.769293 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.769666 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.793189 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") pod \"certified-operators-p4f8v\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") " pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:37 crc kubenswrapper[4725]: I0120 12:01:37.800351 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:38 crc kubenswrapper[4725]: I0120 12:01:38.489592 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"]
Jan 20 12:01:39 crc kubenswrapper[4725]: I0120 12:01:39.169747 4725 generic.go:334] "Generic (PLEG): container finished" podID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerID="6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27" exitCode=0
Jan 20 12:01:39 crc kubenswrapper[4725]: I0120 12:01:39.177706 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerDied","Data":"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27"}
Jan 20 12:01:39 crc kubenswrapper[4725]: I0120 12:01:39.177858 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerStarted","Data":"913cd3e9113ce71406315493bcb40cc6004d717b2eb1135025136e0800cb3fd7"}
Jan 20 12:01:41 crc kubenswrapper[4725]: I0120 12:01:41.191964 4725 generic.go:334] "Generic (PLEG): container finished" podID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerID="3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46" exitCode=0
Jan 20 12:01:41 crc kubenswrapper[4725]: I0120 12:01:41.192032 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerDied","Data":"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46"}
Jan 20 12:01:42 crc kubenswrapper[4725]: I0120 12:01:42.204768 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerStarted","Data":"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"}
Jan 20 12:01:42 crc kubenswrapper[4725]: I0120 12:01:42.233993 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-p4f8v" podStartSLOduration=2.766273622 podStartE2EDuration="5.233966144s" podCreationTimestamp="2026-01-20 12:01:37 +0000 UTC" firstStartedPulling="2026-01-20 12:01:39.172277398 +0000 UTC m=+3427.380599371" lastFinishedPulling="2026-01-20 12:01:41.63996992 +0000 UTC m=+3429.848291893" observedRunningTime="2026-01-20 12:01:42.233625255 +0000 UTC m=+3430.441947228" watchObservedRunningTime="2026-01-20 12:01:42.233966144 +0000 UTC m=+3430.442288117"
Jan 20 12:01:47 crc kubenswrapper[4725]: I0120 12:01:47.801616 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:47 crc kubenswrapper[4725]: I0120 12:01:47.804520 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:47 crc kubenswrapper[4725]: I0120 12:01:47.848393 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:48 crc kubenswrapper[4725]: I0120 12:01:48.321147 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:48 crc kubenswrapper[4725]: I0120 12:01:48.377978 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"]
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.291541 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-p4f8v" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="registry-server" containerID="cri-o://55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424" gracePeriod=2
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.731598 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.796063 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") pod \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") "
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.796198 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") pod \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") "
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.796264 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") pod \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\" (UID: \"5dcba88a-7550-4cc6-965c-43ca26a8ac63\") "
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.808270 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities" (OuterVolumeSpecName: "utilities") pod "5dcba88a-7550-4cc6-965c-43ca26a8ac63" (UID: "5dcba88a-7550-4cc6-965c-43ca26a8ac63"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.813736 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk" (OuterVolumeSpecName: "kube-api-access-sdmvk") pod "5dcba88a-7550-4cc6-965c-43ca26a8ac63" (UID: "5dcba88a-7550-4cc6-965c-43ca26a8ac63"). InnerVolumeSpecName "kube-api-access-sdmvk". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.852608 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5dcba88a-7550-4cc6-965c-43ca26a8ac63" (UID: "5dcba88a-7550-4cc6-965c-43ca26a8ac63"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.898570 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.898638 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-sdmvk\" (UniqueName: \"kubernetes.io/projected/5dcba88a-7550-4cc6-965c-43ca26a8ac63-kube-api-access-sdmvk\") on node \"crc\" DevicePath \"\""
Jan 20 12:01:50 crc kubenswrapper[4725]: I0120 12:01:50.898655 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5dcba88a-7550-4cc6-965c-43ca26a8ac63-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306117 4725 generic.go:334] "Generic (PLEG): container finished" podID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerID="55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424" exitCode=0
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306202 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerDied","Data":"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"}
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306256 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-p4f8v" event={"ID":"5dcba88a-7550-4cc6-965c-43ca26a8ac63","Type":"ContainerDied","Data":"913cd3e9113ce71406315493bcb40cc6004d717b2eb1135025136e0800cb3fd7"}
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306248 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-p4f8v"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.306280 4725 scope.go:117] "RemoveContainer" containerID="55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.334966 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"]
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.337055 4725 scope.go:117] "RemoveContainer" containerID="3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.341538 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-p4f8v"]
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.357255 4725 scope.go:117] "RemoveContainer" containerID="6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.381800 4725 scope.go:117] "RemoveContainer" containerID="55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"
Jan 20 12:01:51 crc kubenswrapper[4725]: E0120 12:01:51.382707 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424\": container with ID starting with 55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424 not found: ID does not exist" containerID="55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.382778 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424"} err="failed to get container status \"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424\": rpc error: code = NotFound desc = could not find container \"55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424\": container with ID starting with 55243acafb96e909d10cc9dd1835b0e437d7c67406a68c95250774d8fee3e424 not found: ID does not exist"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.382827 4725 scope.go:117] "RemoveContainer" containerID="3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46"
Jan 20 12:01:51 crc kubenswrapper[4725]: E0120 12:01:51.383579 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46\": container with ID starting with 3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46 not found: ID does not exist" containerID="3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.383639 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46"} err="failed to get container status \"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46\": rpc error: code = NotFound desc = could not find container \"3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46\": container with ID starting with 3e4cf49ca410c8e757f4724cfb34c0e8e483eeffb2418b2307e3620c69ff6f46 not found: ID does not exist"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.383675 4725 scope.go:117] "RemoveContainer" containerID="6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27"
Jan 20 12:01:51 crc kubenswrapper[4725]: E0120 12:01:51.384131 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27\": container with ID starting with 6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27 not found: ID does not exist" containerID="6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27"
Jan 20 12:01:51 crc kubenswrapper[4725]: I0120 12:01:51.384163 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27"} err="failed to get container status \"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27\": rpc error: code = NotFound desc = could not find container \"6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27\": container with ID starting with 6fdc0b72e67946c65fe68a6fd9f9e4acdec7c7d7defb13b5c1a2b20cac270e27 not found: ID does not exist"
Jan 20 12:01:52 crc kubenswrapper[4725]: I0120 12:01:52.944909 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" path="/var/lib/kubelet/pods/5dcba88a-7550-4cc6-965c-43ca26a8ac63/volumes"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.592600 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-blgnl"]
Jan 20 12:03:46 crc kubenswrapper[4725]: E0120 12:03:46.594539 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="extract-content"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.594593 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="extract-content"
Jan 20 12:03:46 crc kubenswrapper[4725]: E0120 12:03:46.594671 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="registry-server"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.594892 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="registry-server"
Jan 20 12:03:46 crc kubenswrapper[4725]: E0120 12:03:46.594905 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="extract-utilities"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.594915 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="extract-utilities"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.595186 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5dcba88a-7550-4cc6-965c-43ca26a8ac63" containerName="registry-server"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.597221 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.618775 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blgnl"]
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.874146 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.874257 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.874317 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.976283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.976815 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.977480 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.977643 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.978543 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:46 crc kubenswrapper[4725]: I0120 12:03:46.999718 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") pod \"community-operators-blgnl\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") " pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.231003 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.577519 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-blgnl"]
Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.789831 4725 generic.go:334] "Generic (PLEG): container finished" podID="129a7977-fd61-4742-94da-f07dcd889975" containerID="7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85" exitCode=0
Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.790293 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerDied","Data":"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85"}
Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.790340 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerStarted","Data":"e28e988c48cd24ed974de439a1c86425c5586d960673f8d53fc5d5bf8c75d826"}
Jan 20 12:03:47 crc kubenswrapper[4725]: I0120 12:03:47.792339 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 12:03:48 crc kubenswrapper[4725]: I0120 12:03:48.803515 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerStarted","Data":"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"}
Jan 20 12:03:49 crc kubenswrapper[4725]: I0120 12:03:49.819508 4725 generic.go:334] "Generic (PLEG): container finished" podID="129a7977-fd61-4742-94da-f07dcd889975" containerID="584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49" exitCode=0
Jan 20 12:03:49 crc kubenswrapper[4725]: I0120 12:03:49.819624 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerDied","Data":"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"}
Jan 20 12:03:50 crc kubenswrapper[4725]: I0120 12:03:50.831676 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerStarted","Data":"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"}
Jan 20 12:03:50 crc kubenswrapper[4725]: I0120 12:03:50.856614 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-blgnl" podStartSLOduration=2.230689489 podStartE2EDuration="4.856580732s" podCreationTimestamp="2026-01-20 12:03:46 +0000 UTC" firstStartedPulling="2026-01-20 12:03:47.791934801 +0000 UTC m=+3556.000256784" lastFinishedPulling="2026-01-20 12:03:50.417826054 +0000 UTC m=+3558.626148027" observedRunningTime="2026-01-20 12:03:50.856571732 +0000 UTC m=+3559.064893705" watchObservedRunningTime="2026-01-20 12:03:50.856580732 +0000 UTC m=+3559.064902705"
Jan 20 12:03:56 crc kubenswrapper[4725]: I0120 12:03:56.728385 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:03:56 crc kubenswrapper[4725]: I0120 12:03:56.729373 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:03:57 crc kubenswrapper[4725]: I0120 12:03:57.236100 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:57 crc kubenswrapper[4725]: I0120 12:03:57.236154 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:57 crc kubenswrapper[4725]: I0120 12:03:57.291073 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:57 crc kubenswrapper[4725]: I0120 12:03:57.951915 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:03:58 crc kubenswrapper[4725]: I0120 12:03:58.006474 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blgnl"]
Jan 20 12:03:59 crc kubenswrapper[4725]: I0120 12:03:59.922948 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-blgnl" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="registry-server" containerID="cri-o://c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64" gracePeriod=2
Jan 20 12:03:59 crc kubenswrapper[4725]: I0120 12:03:59.958838 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"]
Jan 20 12:03:59 crc kubenswrapper[4725]: I0120 12:03:59.960939 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:03:59 crc kubenswrapper[4725]: I0120 12:03:59.971061 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"]
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.008532 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") pod \"infrawatch-operators-rj8v4\" (UID: \"3ea261e4-31a5-47f1-b7da-585da56b41fd\") " pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.109718 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") pod \"infrawatch-operators-rj8v4\" (UID: \"3ea261e4-31a5-47f1-b7da-585da56b41fd\") " pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.132265 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") pod \"infrawatch-operators-rj8v4\" (UID: \"3ea261e4-31a5-47f1-b7da-585da56b41fd\") " pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.288174 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.744773 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"]
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.792731 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.823116 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") pod \"129a7977-fd61-4742-94da-f07dcd889975\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") "
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.823293 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") pod \"129a7977-fd61-4742-94da-f07dcd889975\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") "
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.823357 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") pod \"129a7977-fd61-4742-94da-f07dcd889975\" (UID: \"129a7977-fd61-4742-94da-f07dcd889975\") "
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.824530 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities" (OuterVolumeSpecName: "utilities") pod "129a7977-fd61-4742-94da-f07dcd889975" (UID: "129a7977-fd61-4742-94da-f07dcd889975"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.831170 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692" (OuterVolumeSpecName: "kube-api-access-8z692") pod "129a7977-fd61-4742-94da-f07dcd889975" (UID: "129a7977-fd61-4742-94da-f07dcd889975"). InnerVolumeSpecName "kube-api-access-8z692". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.894820 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "129a7977-fd61-4742-94da-f07dcd889975" (UID: "129a7977-fd61-4742-94da-f07dcd889975"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.925581 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8z692\" (UniqueName: \"kubernetes.io/projected/129a7977-fd61-4742-94da-f07dcd889975-kube-api-access-8z692\") on node \"crc\" DevicePath \"\""
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.925631 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.925646 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/129a7977-fd61-4742-94da-f07dcd889975-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.942726 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rj8v4" event={"ID":"3ea261e4-31a5-47f1-b7da-585da56b41fd","Type":"ContainerStarted","Data":"151333e65b36b6a12c5b665a2385edff5beb5e239603dbf24e1de973de8464a5"}
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943558 4725 generic.go:334] "Generic (PLEG): container finished" podID="129a7977-fd61-4742-94da-f07dcd889975" containerID="c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64" exitCode=0
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943614 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerDied","Data":"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"}
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943634 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-blgnl"
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943659 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-blgnl" event={"ID":"129a7977-fd61-4742-94da-f07dcd889975","Type":"ContainerDied","Data":"e28e988c48cd24ed974de439a1c86425c5586d960673f8d53fc5d5bf8c75d826"}
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.943711 4725 scope.go:117] "RemoveContainer" containerID="c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.974567 4725 scope.go:117] "RemoveContainer" containerID="584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.984462 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-blgnl"]
Jan 20 12:04:00 crc kubenswrapper[4725]: I0120 12:04:00.992060 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-blgnl"]
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.006322 4725 scope.go:117] "RemoveContainer" containerID="7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85"
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.027222 4725 scope.go:117] "RemoveContainer" containerID="c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"
Jan 20 12:04:01 crc kubenswrapper[4725]: E0120 12:04:01.029230 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64\": container with ID starting with c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64 not found: ID does not exist" containerID="c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.029299 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64"} err="failed to get container status \"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64\": rpc error: code = NotFound desc = could not find container \"c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64\": container with ID starting with c25a037cb0d554213739100d3c4e9019850f4f06456ba80e8c3b84b1f293fb64 not found: ID does not exist"
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.029341 4725 scope.go:117] "RemoveContainer" containerID="584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"
Jan 20 12:04:01 crc kubenswrapper[4725]: E0120 12:04:01.029968 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49\": container with ID starting with 584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49 not found: ID does not exist" containerID="584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.030007 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49"} err="failed to get container status \"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49\": rpc error: code = NotFound desc = could not find container \"584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49\": container with ID starting with 584746dcd4d6c7d4426e652a0c0933b84532fecd85a7c82333762971cb76bb49 not found: ID does not exist"
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.030038 4725 scope.go:117] "RemoveContainer" containerID="7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85"
Jan 20 12:04:01 crc kubenswrapper[4725]: E0120 12:04:01.030536 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85\": container with ID starting with 7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85 not found: ID does not exist" containerID="7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85"
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.030616 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85"} err="failed to get container status \"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85\": rpc error: code = NotFound desc = could not find container \"7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85\": container with ID starting with 7d66c732db14bdc6b0e08d4bf49960c1d7fc1c029d20ae903c19d2a030d03c85 not found: ID does not exist"
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.952602 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rj8v4" event={"ID":"3ea261e4-31a5-47f1-b7da-585da56b41fd","Type":"ContainerStarted","Data":"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"}
Jan 20 12:04:01 crc kubenswrapper[4725]: I0120 12:04:01.976150 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-rj8v4" podStartSLOduration=2.831511676 podStartE2EDuration="2.976124867s" podCreationTimestamp="2026-01-20 12:03:59 +0000 UTC" firstStartedPulling="2026-01-20 12:04:00.750199171 +0000 UTC m=+3568.958521144" lastFinishedPulling="2026-01-20 12:04:00.894812362 +0000 UTC m=+3569.103134335" observedRunningTime="2026-01-20 12:04:01.969903281 +0000 UTC m=+3570.178225254" watchObservedRunningTime="2026-01-20 12:04:01.976124867 +0000 UTC m=+3570.184446840"
Jan 20 12:04:02 crc kubenswrapper[4725]: I0120 12:04:02.951670 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="129a7977-fd61-4742-94da-f07dcd889975" path="/var/lib/kubelet/pods/129a7977-fd61-4742-94da-f07dcd889975/volumes"
Jan 20 12:04:10 crc kubenswrapper[4725]: I0120 12:04:10.289317 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:10 crc kubenswrapper[4725]: I0120 12:04:10.290145 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:10 crc kubenswrapper[4725]: I0120 12:04:10.330764 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:11 crc kubenswrapper[4725]: I0120 12:04:11.076194 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:11 crc kubenswrapper[4725]: I0120 12:04:11.532813 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"]
Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.042137 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-rj8v4" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerName="registry-server" containerID="cri-o://2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291" gracePeriod=2
Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.447435 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.579747 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") pod \"3ea261e4-31a5-47f1-b7da-585da56b41fd\" (UID: \"3ea261e4-31a5-47f1-b7da-585da56b41fd\") "
Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.588949 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt" (OuterVolumeSpecName: "kube-api-access-ns6mt") pod "3ea261e4-31a5-47f1-b7da-585da56b41fd" (UID: "3ea261e4-31a5-47f1-b7da-585da56b41fd"). InnerVolumeSpecName "kube-api-access-ns6mt". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:04:13 crc kubenswrapper[4725]: I0120 12:04:13.681600 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-ns6mt\" (UniqueName: \"kubernetes.io/projected/3ea261e4-31a5-47f1-b7da-585da56b41fd-kube-api-access-ns6mt\") on node \"crc\" DevicePath \"\""
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.053787 4725 generic.go:334] "Generic (PLEG): container finished" podID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerID="2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291" exitCode=0
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.053906 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rj8v4" event={"ID":"3ea261e4-31a5-47f1-b7da-585da56b41fd","Type":"ContainerDied","Data":"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"}
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.053977 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-rj8v4" event={"ID":"3ea261e4-31a5-47f1-b7da-585da56b41fd","Type":"ContainerDied","Data":"151333e65b36b6a12c5b665a2385edff5beb5e239603dbf24e1de973de8464a5"}
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.054008 4725 scope.go:117] "RemoveContainer" containerID="2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.054308 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-rj8v4"
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.079793 4725 scope.go:117] "RemoveContainer" containerID="2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"
Jan 20 12:04:14 crc kubenswrapper[4725]: E0120 12:04:14.080586 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291\": container with ID starting with 2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291 not found: ID does not exist" containerID="2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.080662 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291"} err="failed to get container status \"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291\": rpc error: code = NotFound desc = could not find container \"2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291\": container with ID starting with 2905b47917f4547148bf4197eb2e620f04f5ff08e18fe5383e108e483ea07291 not found: ID does not exist"
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.103277 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"]
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.109626 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-rj8v4"]
Jan 20 12:04:14 crc kubenswrapper[4725]: I0120 12:04:14.942238 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" path="/var/lib/kubelet/pods/3ea261e4-31a5-47f1-b7da-585da56b41fd/volumes"
Jan 20 12:04:26 crc kubenswrapper[4725]: I0120 12:04:26.727745 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:04:26 crc kubenswrapper[4725]: I0120 12:04:26.728693 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.728488 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.729420 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.729496 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8"
Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.730458 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 12:04:56 crc kubenswrapper[4725]: I0120 12:04:56.730535 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831" gracePeriod=600
Jan 20 12:04:57 crc kubenswrapper[4725]: I0120 12:04:57.487435 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831" exitCode=0
Jan 20 12:04:57 crc kubenswrapper[4725]: I0120 12:04:57.487531 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831"}
Jan 20 12:04:57 crc kubenswrapper[4725]: I0120 12:04:57.488253 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"}
Jan 20 12:04:57 crc kubenswrapper[4725]: I0120 12:04:57.488316 4725 scope.go:117] "RemoveContainer" containerID="5ab01f33411fb5c1051924fe375ffa01c5576ba3d2415f78561033bb2318822a"
Jan 20 12:07:26 crc kubenswrapper[4725]: I0120 12:07:26.728384 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:07:26 crc kubenswrapper[4725]: I0120 12:07:26.731953 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:07:56 crc kubenswrapper[4725]: I0120 12:07:56.727858 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:07:56 crc kubenswrapper[4725]: I0120 12:07:56.728529 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.728753 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body=
Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.729549 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused"
Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.729665 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8"
Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.731182 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted"
Jan 20 12:08:26 crc kubenswrapper[4725]: I0120 12:08:26.731283 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" gracePeriod=600
Jan 20 12:08:26 crc kubenswrapper[4725]: E0120 12:08:26.858721 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:08:27 crc kubenswrapper[4725]: I0120 12:08:27.571762 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" exitCode=0
Jan 20 12:08:27 crc kubenswrapper[4725]: I0120 12:08:27.571817 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"}
Jan 20 12:08:27 crc kubenswrapper[4725]: I0120 12:08:27.572285 4725 scope.go:117] "RemoveContainer" containerID="3414b9d2e27158720e99db24bacd938ad4eb7fdbcbbf9083ad3088cf697a9831"
Jan 20 12:08:27 crc kubenswrapper[4725]: I0120 12:08:27.573214 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b"
Jan 20 12:08:27 crc kubenswrapper[4725]: E0120 12:08:27.575931 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\""
pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:08:39 crc kubenswrapper[4725]: I0120 12:08:39.932384 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:08:39 crc kubenswrapper[4725]: E0120 12:08:39.933557 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:08:50 crc kubenswrapper[4725]: I0120 12:08:50.933580 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:08:50 crc kubenswrapper[4725]: E0120 12:08:50.934755 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.425182 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"] Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.426680 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="extract-utilities" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.426708 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="extract-utilities" Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.426755 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="registry-server" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.426767 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="registry-server" Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.426789 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerName="registry-server" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.426803 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerName="registry-server" Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.426827 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="extract-content" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.426838 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="extract-content" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.427121 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3ea261e4-31a5-47f1-b7da-585da56b41fd" containerName="registry-server" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.427154 4725 memory_manager.go:354] "RemoveStaleState removing state" 
podUID="129a7977-fd61-4742-94da-f07dcd889975" containerName="registry-server" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.428329 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.437322 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"] Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.546947 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") pod \"infrawatch-operators-2ttwk\" (UID: \"c04dbdce-d40b-4ab6-a770-29307869c23c\") " pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.650570 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") pod \"infrawatch-operators-2ttwk\" (UID: \"c04dbdce-d40b-4ab6-a770-29307869c23c\") " pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.673415 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") pod \"infrawatch-operators-2ttwk\" (UID: \"c04dbdce-d40b-4ab6-a770-29307869c23c\") " pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.751355 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:05 crc kubenswrapper[4725]: I0120 12:09:05.932360 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:09:05 crc kubenswrapper[4725]: E0120 12:09:05.933175 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:09:06 crc kubenswrapper[4725]: I0120 12:09:06.067653 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"] Jan 20 12:09:06 crc kubenswrapper[4725]: I0120 12:09:06.083206 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 12:09:06 crc kubenswrapper[4725]: I0120 12:09:06.983292 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2ttwk" event={"ID":"c04dbdce-d40b-4ab6-a770-29307869c23c","Type":"ContainerStarted","Data":"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"} Jan 20 12:09:06 crc kubenswrapper[4725]: I0120 12:09:06.983369 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2ttwk" event={"ID":"c04dbdce-d40b-4ab6-a770-29307869c23c","Type":"ContainerStarted","Data":"fd4ba608abad54884b78044eb3da74fc1b2260422eff0a852105597c3a216ab8"} Jan 20 12:09:07 crc kubenswrapper[4725]: I0120 12:09:07.009795 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-2ttwk" podStartSLOduration=1.867875263 podStartE2EDuration="2.009733537s" podCreationTimestamp="2026-01-20 12:09:05 +0000 UTC" firstStartedPulling="2026-01-20 12:09:06.082865663 +0000 UTC m=+3874.291187636" lastFinishedPulling="2026-01-20 12:09:06.224723937 +0000 UTC m=+3874.433045910" observedRunningTime="2026-01-20 12:09:07.00348409 +0000 UTC m=+3875.211806073" watchObservedRunningTime="2026-01-20 12:09:07.009733537 +0000 UTC m=+3875.218055520" Jan 20 12:09:15 crc kubenswrapper[4725]: I0120 12:09:15.751942 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:15 crc kubenswrapper[4725]: I0120 12:09:15.754567 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:15 crc kubenswrapper[4725]: I0120 12:09:15.797336 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:16 crc kubenswrapper[4725]: I0120 12:09:16.118558 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:16 crc kubenswrapper[4725]: I0120 12:09:16.175717 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"] Jan 20 12:09:18 crc kubenswrapper[4725]: I0120 12:09:18.094692 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-2ttwk" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerName="registry-server" 
containerID="cri-o://df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee" gracePeriod=2 Jan 20 12:09:18 crc kubenswrapper[4725]: I0120 12:09:18.782245 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:18 crc kubenswrapper[4725]: I0120 12:09:18.964432 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") pod \"c04dbdce-d40b-4ab6-a770-29307869c23c\" (UID: \"c04dbdce-d40b-4ab6-a770-29307869c23c\") " Jan 20 12:09:18 crc kubenswrapper[4725]: I0120 12:09:18.976208 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8" (OuterVolumeSpecName: "kube-api-access-x98n8") pod "c04dbdce-d40b-4ab6-a770-29307869c23c" (UID: "c04dbdce-d40b-4ab6-a770-29307869c23c"). InnerVolumeSpecName "kube-api-access-x98n8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.066234 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-x98n8\" (UniqueName: \"kubernetes.io/projected/c04dbdce-d40b-4ab6-a770-29307869c23c-kube-api-access-x98n8\") on node \"crc\" DevicePath \"\"" Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106269 4725 generic.go:334] "Generic (PLEG): container finished" podID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerID="df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee" exitCode=0 Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106339 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2ttwk" event={"ID":"c04dbdce-d40b-4ab6-a770-29307869c23c","Type":"ContainerDied","Data":"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"} Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106349 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-2ttwk" Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106388 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-2ttwk" event={"ID":"c04dbdce-d40b-4ab6-a770-29307869c23c","Type":"ContainerDied","Data":"fd4ba608abad54884b78044eb3da74fc1b2260422eff0a852105597c3a216ab8"} Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.106414 4725 scope.go:117] "RemoveContainer" containerID="df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee" Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.144453 4725 scope.go:117] "RemoveContainer" containerID="df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee" Jan 20 12:09:19 crc kubenswrapper[4725]: E0120 12:09:19.145093 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee\": container with ID starting with df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee not found: ID does not exist" containerID="df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee" Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.145136 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee"} err="failed to get container status \"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee\": rpc error: code = NotFound desc = could not find container \"df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee\": container with ID starting with df59315b38b0785979f369a3bcf86b797414bb3e9be402d404c652c5eef263ee not found: ID does not exist" Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.158924 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"] Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.169272 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-2ttwk"] Jan 20 12:09:19 crc kubenswrapper[4725]: I0120 12:09:19.932750 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:09:19 crc kubenswrapper[4725]: E0120 12:09:19.933151 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:09:20 crc kubenswrapper[4725]: I0120 12:09:20.954938 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" path="/var/lib/kubelet/pods/c04dbdce-d40b-4ab6-a770-29307869c23c/volumes" Jan 20 12:09:32 crc kubenswrapper[4725]: I0120 12:09:32.938553 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:09:32 crc kubenswrapper[4725]: E0120 12:09:32.941769 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:09:44 crc kubenswrapper[4725]: I0120 12:09:44.932723 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:09:44 crc kubenswrapper[4725]: E0120 12:09:44.933943 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:09:58 crc kubenswrapper[4725]: I0120 12:09:58.932569 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:09:58 crc kubenswrapper[4725]: E0120 12:09:58.933649 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.383687 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"] Jan 20 12:10:04 crc kubenswrapper[4725]: E0120 12:10:04.385179 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerName="registry-server" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.385206 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerName="registry-server" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.385477 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c04dbdce-d40b-4ab6-a770-29307869c23c" containerName="registry-server" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.387438 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.629464 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"] Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.731540 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.731603 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.731644 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.835283 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.835748 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.835796 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.836285 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.836681 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.863131 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") pod \"redhat-operators-jtnxl\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:04 crc kubenswrapper[4725]: I0120 12:10:04.953925 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:05 crc kubenswrapper[4725]: I0120 12:10:05.222891 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"] Jan 20 12:10:05 crc kubenswrapper[4725]: I0120 12:10:05.654192 4725 generic.go:334] "Generic (PLEG): container finished" podID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerID="029968eb55c812507bab83444b4f5735976f6601188235dc475e69bc38de138d" exitCode=0 Jan 20 12:10:05 crc kubenswrapper[4725]: I0120 12:10:05.654248 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerDied","Data":"029968eb55c812507bab83444b4f5735976f6601188235dc475e69bc38de138d"} Jan 20 12:10:05 crc kubenswrapper[4725]: I0120 12:10:05.654278 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerStarted","Data":"fff4b06d60c6f765cf9c65a26e4e50c9347c82f2284687cae5f1eaf97ae21b3a"} Jan 20 12:10:06 crc kubenswrapper[4725]: I0120 12:10:06.668732 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerStarted","Data":"6780ca67cee689c089acf7445a9626281fb53914e78dd7fd07b93a6187e7bde4"} Jan 20 12:10:08 crc kubenswrapper[4725]: I0120 12:10:08.692485 4725 generic.go:334] "Generic (PLEG): container finished" podID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerID="6780ca67cee689c089acf7445a9626281fb53914e78dd7fd07b93a6187e7bde4" exitCode=0 Jan 20 12:10:08 crc kubenswrapper[4725]: I0120 12:10:08.692573 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerDied","Data":"6780ca67cee689c089acf7445a9626281fb53914e78dd7fd07b93a6187e7bde4"} Jan 20 12:10:09 crc kubenswrapper[4725]: I0120 12:10:09.706060 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerStarted","Data":"735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605"} Jan 20 12:10:13 crc kubenswrapper[4725]: I0120 12:10:13.932717 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:10:13 crc kubenswrapper[4725]: E0120 12:10:13.933660 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:10:14 crc kubenswrapper[4725]: I0120 12:10:14.954490 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" 
pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:14 crc kubenswrapper[4725]: I0120 12:10:14.955726 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:16 crc kubenswrapper[4725]: I0120 12:10:16.028237 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jtnxl" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server" probeResult="failure" output=< Jan 20 12:10:16 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 12:10:16 crc kubenswrapper[4725]: > Jan 20 12:10:25 crc kubenswrapper[4725]: I0120 12:10:25.006135 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:25 crc kubenswrapper[4725]: I0120 12:10:25.044159 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jtnxl" podStartSLOduration=17.396799168 podStartE2EDuration="21.044129307s" podCreationTimestamp="2026-01-20 12:10:04 +0000 UTC" firstStartedPulling="2026-01-20 12:10:05.656212972 +0000 UTC m=+3933.864534945" lastFinishedPulling="2026-01-20 12:10:09.303543111 +0000 UTC m=+3937.511865084" observedRunningTime="2026-01-20 12:10:09.738424748 +0000 UTC m=+3937.946746721" watchObservedRunningTime="2026-01-20 12:10:25.044129307 +0000 UTC m=+3953.252451290" Jan 20 12:10:25 crc kubenswrapper[4725]: I0120 12:10:25.066956 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:25 crc kubenswrapper[4725]: I0120 12:10:25.262581 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"] Jan 20 12:10:26 crc kubenswrapper[4725]: I0120 12:10:26.362706 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jtnxl" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server" containerID="cri-o://735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605" gracePeriod=2 Jan 20 12:10:27 crc kubenswrapper[4725]: I0120 12:10:27.375993 4725 generic.go:334] "Generic (PLEG): container finished" podID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerID="735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605" exitCode=0 Jan 20 12:10:27 crc kubenswrapper[4725]: I0120 12:10:27.376193 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerDied","Data":"735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605"} Jan 20 12:10:27 crc kubenswrapper[4725]: I0120 12:10:27.965717 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.150026 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") pod \"149c1b55-088a-4bd8-beaf-ca554aefa16c\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.150128 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") pod \"149c1b55-088a-4bd8-beaf-ca554aefa16c\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.150372 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") pod \"149c1b55-088a-4bd8-beaf-ca554aefa16c\" (UID: \"149c1b55-088a-4bd8-beaf-ca554aefa16c\") " Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.151799 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities" (OuterVolumeSpecName: "utilities") pod "149c1b55-088a-4bd8-beaf-ca554aefa16c" (UID: "149c1b55-088a-4bd8-beaf-ca554aefa16c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.174239 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7" (OuterVolumeSpecName: "kube-api-access-qrqc7") pod "149c1b55-088a-4bd8-beaf-ca554aefa16c" (UID: "149c1b55-088a-4bd8-beaf-ca554aefa16c"). InnerVolumeSpecName "kube-api-access-qrqc7". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.252316 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.252368 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qrqc7\" (UniqueName: \"kubernetes.io/projected/149c1b55-088a-4bd8-beaf-ca554aefa16c-kube-api-access-qrqc7\") on node \"crc\" DevicePath \"\"" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.280699 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149c1b55-088a-4bd8-beaf-ca554aefa16c" (UID: "149c1b55-088a-4bd8-beaf-ca554aefa16c"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.354222 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149c1b55-088a-4bd8-beaf-ca554aefa16c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.385904 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jtnxl" event={"ID":"149c1b55-088a-4bd8-beaf-ca554aefa16c","Type":"ContainerDied","Data":"fff4b06d60c6f765cf9c65a26e4e50c9347c82f2284687cae5f1eaf97ae21b3a"} Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.385976 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jtnxl" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.385977 4725 scope.go:117] "RemoveContainer" containerID="735f5d970f8d5a1c45fbe47087952f9fd27bd25b66b262589ddc6d4b423b0605" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.406370 4725 scope.go:117] "RemoveContainer" containerID="6780ca67cee689c089acf7445a9626281fb53914e78dd7fd07b93a6187e7bde4" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.439301 4725 scope.go:117] "RemoveContainer" containerID="029968eb55c812507bab83444b4f5735976f6601188235dc475e69bc38de138d" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.444944 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"] Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.451362 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jtnxl"] Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.932602 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:10:28 crc kubenswrapper[4725]: E0120 12:10:28.933046 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:10:28 crc kubenswrapper[4725]: I0120 12:10:28.943285 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" path="/var/lib/kubelet/pods/149c1b55-088a-4bd8-beaf-ca554aefa16c/volumes" Jan 20 12:10:39 crc kubenswrapper[4725]: I0120 12:10:39.933321 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:10:39 crc kubenswrapper[4725]: E0120 12:10:39.934497 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:10:51 crc kubenswrapper[4725]: I0120 12:10:51.931907 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:10:51 crc kubenswrapper[4725]: E0120 12:10:51.932898 
4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:11:02 crc kubenswrapper[4725]: I0120 12:11:02.936577 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:11:02 crc kubenswrapper[4725]: E0120 12:11:02.937519 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:11:14 crc kubenswrapper[4725]: I0120 12:11:14.932214 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:11:14 crc kubenswrapper[4725]: E0120 12:11:14.933645 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:11:28 crc kubenswrapper[4725]: I0120 12:11:28.933749 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:11:28 crc kubenswrapper[4725]: E0120 12:11:28.934699 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:11:42 crc kubenswrapper[4725]: I0120 12:11:42.941540 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:11:42 crc kubenswrapper[4725]: E0120 12:11:42.942791 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.265538 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:11:49 crc kubenswrapper[4725]: E0120 12:11:49.266693 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="extract-utilities" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 
12:11:49.266711 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="extract-utilities" Jan 20 12:11:49 crc kubenswrapper[4725]: E0120 12:11:49.266731 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.266740 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server" Jan 20 12:11:49 crc kubenswrapper[4725]: E0120 12:11:49.266761 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="extract-content" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.266768 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="extract-content" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.266921 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="149c1b55-088a-4bd8-beaf-ca554aefa16c" containerName="registry-server" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.268014 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.284327 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.338243 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.338522 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.338698 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.440371 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.440993 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 
12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.441189 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.441255 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.441615 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.474866 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") pod \"certified-operators-5m7sx\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:49 crc kubenswrapper[4725]: I0120 12:11:49.594208 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:50 crc kubenswrapper[4725]: I0120 12:11:50.107935 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:11:50 crc kubenswrapper[4725]: I0120 12:11:50.253627 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerStarted","Data":"b48ca1fa046c8e287de985bb17fb7828ccd59181d6510c4af368e691e4a7eb94"} Jan 20 12:11:51 crc kubenswrapper[4725]: I0120 12:11:51.266443 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerID="12fc0d4b7a6b440d05aae65bbaf75415b33cc1b772ffbbdf7c18502d8fa4db78" exitCode=0 Jan 20 12:11:51 crc kubenswrapper[4725]: I0120 12:11:51.266536 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerDied","Data":"12fc0d4b7a6b440d05aae65bbaf75415b33cc1b772ffbbdf7c18502d8fa4db78"} Jan 20 12:11:53 crc kubenswrapper[4725]: I0120 12:11:53.306879 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerID="b11d2d8a1b0606ecc18cd1499a12a7672ace55137edbf153607ef35e8279f66f" exitCode=0 Jan 20 12:11:53 crc kubenswrapper[4725]: I0120 12:11:53.306962 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerDied","Data":"b11d2d8a1b0606ecc18cd1499a12a7672ace55137edbf153607ef35e8279f66f"} Jan 20 12:11:54 crc kubenswrapper[4725]: I0120 12:11:54.932514 4725 scope.go:117] "RemoveContainer" 
containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:11:54 crc kubenswrapper[4725]: E0120 12:11:54.933260 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:11:55 crc kubenswrapper[4725]: I0120 12:11:55.330622 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerStarted","Data":"fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4"} Jan 20 12:11:55 crc kubenswrapper[4725]: I0120 12:11:55.366421 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-5m7sx" podStartSLOduration=3.127371095 podStartE2EDuration="6.36639657s" podCreationTimestamp="2026-01-20 12:11:49 +0000 UTC" firstStartedPulling="2026-01-20 12:11:51.273416221 +0000 UTC m=+4039.481738214" lastFinishedPulling="2026-01-20 12:11:54.512441716 +0000 UTC m=+4042.720763689" observedRunningTime="2026-01-20 12:11:55.361620689 +0000 UTC m=+4043.569942682" watchObservedRunningTime="2026-01-20 12:11:55.36639657 +0000 UTC m=+4043.574718553" Jan 20 12:11:59 crc kubenswrapper[4725]: I0120 12:11:59.594907 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:59 crc kubenswrapper[4725]: I0120 12:11:59.595632 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:11:59 crc kubenswrapper[4725]: I0120 12:11:59.642196 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:12:00 crc kubenswrapper[4725]: I0120 12:12:00.445249 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:12:00 crc kubenswrapper[4725]: I0120 12:12:00.513202 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:12:02 crc kubenswrapper[4725]: I0120 12:12:02.421686 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-5m7sx" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="registry-server" containerID="cri-o://fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4" gracePeriod=2 Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.432821 4725 generic.go:334] "Generic (PLEG): container finished" podID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerID="fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4" exitCode=0 Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.432900 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerDied","Data":"fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4"} Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.433575 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" 
pod="openshift-marketplace/certified-operators-5m7sx" event={"ID":"5f43a5ae-ed9d-43b3-9729-5c1110c63635","Type":"ContainerDied","Data":"b48ca1fa046c8e287de985bb17fb7828ccd59181d6510c4af368e691e4a7eb94"} Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.433610 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b48ca1fa046c8e287de985bb17fb7828ccd59181d6510c4af368e691e4a7eb94" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.475466 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.539336 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") pod \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.539402 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") pod \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.539491 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") pod \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\" (UID: \"5f43a5ae-ed9d-43b3-9729-5c1110c63635\") " Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.543220 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities" (OuterVolumeSpecName: "utilities") pod "5f43a5ae-ed9d-43b3-9729-5c1110c63635" (UID: "5f43a5ae-ed9d-43b3-9729-5c1110c63635"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.560408 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq" (OuterVolumeSpecName: "kube-api-access-77qvq") pod "5f43a5ae-ed9d-43b3-9729-5c1110c63635" (UID: "5f43a5ae-ed9d-43b3-9729-5c1110c63635"). InnerVolumeSpecName "kube-api-access-77qvq". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.592956 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5f43a5ae-ed9d-43b3-9729-5c1110c63635" (UID: "5f43a5ae-ed9d-43b3-9729-5c1110c63635"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.641876 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.641930 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-77qvq\" (UniqueName: \"kubernetes.io/projected/5f43a5ae-ed9d-43b3-9729-5c1110c63635-kube-api-access-77qvq\") on node \"crc\" DevicePath \"\"" Jan 20 12:12:03 crc kubenswrapper[4725]: I0120 12:12:03.641949 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5f43a5ae-ed9d-43b3-9729-5c1110c63635-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:12:04 crc kubenswrapper[4725]: I0120 12:12:04.442326 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-5m7sx" Jan 20 12:12:04 crc kubenswrapper[4725]: I0120 12:12:04.480561 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:12:04 crc kubenswrapper[4725]: I0120 12:12:04.488906 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-5m7sx"] Jan 20 12:12:04 crc kubenswrapper[4725]: I0120 12:12:04.951885 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" path="/var/lib/kubelet/pods/5f43a5ae-ed9d-43b3-9729-5c1110c63635/volumes" Jan 20 12:12:07 crc kubenswrapper[4725]: I0120 12:12:07.934895 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:07 crc kubenswrapper[4725]: E0120 12:12:07.935359 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:12:19 crc kubenswrapper[4725]: I0120 12:12:19.934367 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:19 crc kubenswrapper[4725]: E0120 12:12:19.935710 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:12:30 crc kubenswrapper[4725]: I0120 12:12:30.932938 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:30 crc kubenswrapper[4725]: E0120 12:12:30.933892 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:12:43 crc kubenswrapper[4725]: I0120 12:12:43.933760 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:43 crc kubenswrapper[4725]: E0120 12:12:43.935205 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:12:54 crc kubenswrapper[4725]: I0120 12:12:54.932914 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:12:54 crc kubenswrapper[4725]: E0120 12:12:54.933910 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:13:08 crc kubenswrapper[4725]: I0120 12:13:08.937130 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:13:08 crc kubenswrapper[4725]: E0120 12:13:08.938266 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:13:23 crc kubenswrapper[4725]: I0120 12:13:23.050702 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:13:23 crc kubenswrapper[4725]: E0120 12:13:23.052233 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:13:36 crc kubenswrapper[4725]: I0120 12:13:36.932444 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:13:37 crc kubenswrapper[4725]: I0120 12:13:37.239521 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6"} Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.509685 4725 kubelet.go:2421] "SyncLoop ADD" source="api" 
pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:13:50 crc kubenswrapper[4725]: E0120 12:13:50.510985 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="extract-content" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.511018 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="extract-content" Jan 20 12:13:50 crc kubenswrapper[4725]: E0120 12:13:50.511048 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="registry-server" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.511060 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="registry-server" Jan 20 12:13:50 crc kubenswrapper[4725]: E0120 12:13:50.511104 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="extract-utilities" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.511114 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="extract-utilities" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.511301 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="5f43a5ae-ed9d-43b3-9729-5c1110c63635" containerName="registry-server" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.512504 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.522992 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.697599 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.697702 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.697726 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.799528 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.799613 4725 
reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.799635 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.800336 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.800948 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.823345 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") pod \"community-operators-z6446\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:50 crc kubenswrapper[4725]: I0120 12:13:50.844769 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:13:51 crc kubenswrapper[4725]: I0120 12:13:51.539889 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:13:51 crc kubenswrapper[4725]: W0120 12:13:51.546893 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podc2ef7efe_4c79_4017_903c_aa5ecb307df0.slice/crio-4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85 WatchSource:0}: Error finding container 4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85: Status 404 returned error can't find the container with id 4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85 Jan 20 12:13:52 crc kubenswrapper[4725]: I0120 12:13:52.551840 4725 generic.go:334] "Generic (PLEG): container finished" podID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerID="8b650c3f884771f6b8012af8c700a2a9c63c439a2436778c0694ae94e31d1bf3" exitCode=0 Jan 20 12:13:52 crc kubenswrapper[4725]: I0120 12:13:52.552051 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerDied","Data":"8b650c3f884771f6b8012af8c700a2a9c63c439a2436778c0694ae94e31d1bf3"} Jan 20 12:13:52 crc kubenswrapper[4725]: I0120 12:13:52.552250 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerStarted","Data":"4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85"} Jan 20 12:13:53 crc kubenswrapper[4725]: I0120 12:13:53.563665 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerStarted","Data":"e299d57fe3b730427479aa74a338c6276dc2a93442c8ef04ec170d411d8ae033"} Jan 20 12:13:54 crc kubenswrapper[4725]: I0120 12:13:54.574843 4725 generic.go:334] "Generic (PLEG): container finished" podID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerID="e299d57fe3b730427479aa74a338c6276dc2a93442c8ef04ec170d411d8ae033" exitCode=0 Jan 20 12:13:54 crc kubenswrapper[4725]: I0120 12:13:54.574910 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerDied","Data":"e299d57fe3b730427479aa74a338c6276dc2a93442c8ef04ec170d411d8ae033"} Jan 20 12:13:55 crc kubenswrapper[4725]: I0120 12:13:55.584281 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerStarted","Data":"4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49"} Jan 20 12:13:55 crc kubenswrapper[4725]: I0120 12:13:55.800810 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-z6446" podStartSLOduration=3.32266467 podStartE2EDuration="5.800777118s" podCreationTimestamp="2026-01-20 12:13:50 +0000 UTC" firstStartedPulling="2026-01-20 12:13:52.554387721 +0000 UTC m=+4160.762709694" lastFinishedPulling="2026-01-20 12:13:55.032500169 +0000 UTC m=+4163.240822142" observedRunningTime="2026-01-20 12:13:55.797954829 +0000 UTC m=+4164.006276842" watchObservedRunningTime="2026-01-20 12:13:55.800777118 +0000 UTC m=+4164.009099091" Jan 
20 12:14:00 crc kubenswrapper[4725]: I0120 12:14:00.846025 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:00 crc kubenswrapper[4725]: I0120 12:14:00.846583 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:00 crc kubenswrapper[4725]: I0120 12:14:00.948120 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:01 crc kubenswrapper[4725]: I0120 12:14:01.714523 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:01 crc kubenswrapper[4725]: I0120 12:14:01.785012 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:14:03 crc kubenswrapper[4725]: I0120 12:14:03.670194 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-z6446" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="registry-server" containerID="cri-o://4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49" gracePeriod=2 Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.687814 4725 generic.go:334] "Generic (PLEG): container finished" podID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerID="4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49" exitCode=0 Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.688062 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerDied","Data":"4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49"} Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.688263 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-z6446" event={"ID":"c2ef7efe-4c79-4017-903c-aa5ecb307df0","Type":"ContainerDied","Data":"4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85"} Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.688288 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4f32093d3b01bd134a8a07fef6bd73609c7cbb46a53ebbd3d6e941ffd66fec85" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.717701 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.816043 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") pod \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.816150 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") pod \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.816337 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") pod \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\" (UID: \"c2ef7efe-4c79-4017-903c-aa5ecb307df0\") " Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.817416 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities" (OuterVolumeSpecName: "utilities") pod "c2ef7efe-4c79-4017-903c-aa5ecb307df0" (UID: "c2ef7efe-4c79-4017-903c-aa5ecb307df0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.833520 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz" (OuterVolumeSpecName: "kube-api-access-hlfwz") pod "c2ef7efe-4c79-4017-903c-aa5ecb307df0" (UID: "c2ef7efe-4c79-4017-903c-aa5ecb307df0"). InnerVolumeSpecName "kube-api-access-hlfwz". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.895995 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "c2ef7efe-4c79-4017-903c-aa5ecb307df0" (UID: "c2ef7efe-4c79-4017-903c-aa5ecb307df0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.918919 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.918963 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-hlfwz\" (UniqueName: \"kubernetes.io/projected/c2ef7efe-4c79-4017-903c-aa5ecb307df0-kube-api-access-hlfwz\") on node \"crc\" DevicePath \"\"" Jan 20 12:14:04 crc kubenswrapper[4725]: I0120 12:14:04.918974 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/c2ef7efe-4c79-4017-903c-aa5ecb307df0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:14:05 crc kubenswrapper[4725]: I0120 12:14:05.694487 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-z6446" Jan 20 12:14:05 crc kubenswrapper[4725]: I0120 12:14:05.724407 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:14:05 crc kubenswrapper[4725]: I0120 12:14:05.734702 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-z6446"] Jan 20 12:14:06 crc kubenswrapper[4725]: I0120 12:14:06.943640 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" path="/var/lib/kubelet/pods/c2ef7efe-4c79-4017-903c-aa5ecb307df0/volumes" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.282685 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:37 crc kubenswrapper[4725]: E0120 12:14:37.283902 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="extract-utilities" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.283923 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="extract-utilities" Jan 20 12:14:37 crc kubenswrapper[4725]: E0120 12:14:37.283946 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="registry-server" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.283954 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="registry-server" Jan 20 12:14:37 crc kubenswrapper[4725]: E0120 12:14:37.283978 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="extract-content" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.283987 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="extract-content" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.284216 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="c2ef7efe-4c79-4017-903c-aa5ecb307df0" containerName="registry-server" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.284959 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.291480 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.430440 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") pod \"infrawatch-operators-pcqpc\" (UID: \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\") " pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.532457 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") pod \"infrawatch-operators-pcqpc\" (UID: \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\") " pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.561117 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") pod \"infrawatch-operators-pcqpc\" (UID: \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\") " pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.649858 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.923631 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:37 crc kubenswrapper[4725]: I0120 12:14:37.936447 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 12:14:38 crc kubenswrapper[4725]: I0120 12:14:38.057674 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pcqpc" event={"ID":"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893","Type":"ContainerStarted","Data":"5d4ca42ad1acab36b21f0c6b0dc950eb93553276b1ffe4509637de1202cc10fa"} Jan 20 12:14:39 crc kubenswrapper[4725]: I0120 12:14:39.070466 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pcqpc" event={"ID":"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893","Type":"ContainerStarted","Data":"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91"} Jan 20 12:14:39 crc kubenswrapper[4725]: I0120 12:14:39.096468 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-pcqpc" podStartSLOduration=1.8763073590000001 podStartE2EDuration="2.096423194s" podCreationTimestamp="2026-01-20 12:14:37 +0000 UTC" firstStartedPulling="2026-01-20 12:14:37.936131586 +0000 UTC m=+4206.144453559" lastFinishedPulling="2026-01-20 12:14:38.156247421 +0000 UTC m=+4206.364569394" observedRunningTime="2026-01-20 12:14:39.090615421 +0000 UTC m=+4207.298937424" watchObservedRunningTime="2026-01-20 12:14:39.096423194 +0000 UTC m=+4207.304745207" Jan 20 12:14:47 crc kubenswrapper[4725]: I0120 12:14:47.651728 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:47 crc kubenswrapper[4725]: I0120 12:14:47.652747 4725 
kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:47 crc kubenswrapper[4725]: I0120 12:14:47.713636 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:48 crc kubenswrapper[4725]: I0120 12:14:48.228734 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:49 crc kubenswrapper[4725]: I0120 12:14:49.025017 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.205423 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-pcqpc" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerName="registry-server" containerID="cri-o://6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" gracePeriod=2 Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.635950 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.800869 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") pod \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\" (UID: \"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893\") " Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.823489 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w" (OuterVolumeSpecName: "kube-api-access-8mc9w") pod "3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" (UID: "3afa6bc5-f864-43f4-9eb4-a7dbc8de5893"). InnerVolumeSpecName "kube-api-access-8mc9w". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:14:50 crc kubenswrapper[4725]: I0120 12:14:50.904038 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-8mc9w\" (UniqueName: \"kubernetes.io/projected/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893-kube-api-access-8mc9w\") on node \"crc\" DevicePath \"\"" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.218659 4725 generic.go:334] "Generic (PLEG): container finished" podID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerID="6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" exitCode=0 Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.218736 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pcqpc" event={"ID":"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893","Type":"ContainerDied","Data":"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91"} Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.218819 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-pcqpc" event={"ID":"3afa6bc5-f864-43f4-9eb4-a7dbc8de5893","Type":"ContainerDied","Data":"5d4ca42ad1acab36b21f0c6b0dc950eb93553276b1ffe4509637de1202cc10fa"} Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.218842 4725 scope.go:117] "RemoveContainer" containerID="6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.221005 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-pcqpc" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.256146 4725 scope.go:117] "RemoveContainer" containerID="6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" Jan 20 12:14:51 crc kubenswrapper[4725]: E0120 12:14:51.258945 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91\": container with ID starting with 6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91 not found: ID does not exist" containerID="6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.259338 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91"} err="failed to get container status \"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91\": rpc error: code = NotFound desc = could not find container \"6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91\": container with ID starting with 6f1f18d0f69625c910770d7995b6732911b80b75dbd1f72c7e20e3152bf17e91 not found: ID does not exist" Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.269534 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:51 crc kubenswrapper[4725]: I0120 12:14:51.278003 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-pcqpc"] Jan 20 12:14:52 crc kubenswrapper[4725]: I0120 12:14:52.949182 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" path="/var/lib/kubelet/pods/3afa6bc5-f864-43f4-9eb4-a7dbc8de5893/volumes" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.199611 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl"] Jan 20 12:15:00 crc kubenswrapper[4725]: E0120 12:15:00.203499 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerName="registry-server" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.203519 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerName="registry-server" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.203677 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="3afa6bc5-f864-43f4-9eb4-a7dbc8de5893" containerName="registry-server" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.204269 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.206572 4725 reflector.go:368] Caches populated for *v1.ConfigMap from object-"openshift-operator-lifecycle-manager"/"collect-profiles-config" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.207222 4725 reflector.go:368] Caches populated for *v1.Secret from object-"openshift-operator-lifecycle-manager"/"collect-profiles-dockercfg-kzf4t" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.217519 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl"] Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.294474 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.294607 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.294677 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.396266 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.396351 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.396462 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.398315 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") pod 
\"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.414229 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.418412 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") pod \"collect-profiles-29481855-bbzhl\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.523689 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:00 crc kubenswrapper[4725]: I0120 12:15:00.821942 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl"] Jan 20 12:15:01 crc kubenswrapper[4725]: I0120 12:15:01.337058 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" event={"ID":"604a5ea1-fb17-44e8-9c63-30238fdea94d","Type":"ContainerStarted","Data":"fa4cf535d5e81a4cf0ea0b637ecfa36dfafb70bf14c9057ee7b5f5e6043e358e"} Jan 20 12:15:01 crc kubenswrapper[4725]: I0120 12:15:01.337140 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" event={"ID":"604a5ea1-fb17-44e8-9c63-30238fdea94d","Type":"ContainerStarted","Data":"917d494b66019293ca66267c446d95a9639ed0de12bcb3eac631abc66f0d47a7"} Jan 20 12:15:01 crc kubenswrapper[4725]: I0120 12:15:01.377646 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" podStartSLOduration=1.377607866 podStartE2EDuration="1.377607866s" podCreationTimestamp="2026-01-20 12:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 12:15:01.36156985 +0000 UTC m=+4229.569891833" watchObservedRunningTime="2026-01-20 12:15:01.377607866 +0000 UTC m=+4229.585929849" Jan 20 12:15:02 crc kubenswrapper[4725]: I0120 12:15:02.345627 4725 generic.go:334] "Generic (PLEG): container finished" podID="604a5ea1-fb17-44e8-9c63-30238fdea94d" containerID="fa4cf535d5e81a4cf0ea0b637ecfa36dfafb70bf14c9057ee7b5f5e6043e358e" exitCode=0 Jan 20 12:15:02 crc kubenswrapper[4725]: I0120 12:15:02.345753 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" event={"ID":"604a5ea1-fb17-44e8-9c63-30238fdea94d","Type":"ContainerDied","Data":"fa4cf535d5e81a4cf0ea0b637ecfa36dfafb70bf14c9057ee7b5f5e6043e358e"} Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.622306 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.757052 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") pod \"604a5ea1-fb17-44e8-9c63-30238fdea94d\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.757179 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") pod \"604a5ea1-fb17-44e8-9c63-30238fdea94d\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.757226 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") pod \"604a5ea1-fb17-44e8-9c63-30238fdea94d\" (UID: \"604a5ea1-fb17-44e8-9c63-30238fdea94d\") " Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.759588 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume" (OuterVolumeSpecName: "config-volume") pod "604a5ea1-fb17-44e8-9c63-30238fdea94d" (UID: "604a5ea1-fb17-44e8-9c63-30238fdea94d"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGidValue "" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.767284 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr" (OuterVolumeSpecName: "kube-api-access-wl4lr") pod "604a5ea1-fb17-44e8-9c63-30238fdea94d" (UID: "604a5ea1-fb17-44e8-9c63-30238fdea94d"). InnerVolumeSpecName "kube-api-access-wl4lr". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.782981 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "604a5ea1-fb17-44e8-9c63-30238fdea94d" (UID: "604a5ea1-fb17-44e8-9c63-30238fdea94d"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGidValue "" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.859598 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-wl4lr\" (UniqueName: \"kubernetes.io/projected/604a5ea1-fb17-44e8-9c63-30238fdea94d-kube-api-access-wl4lr\") on node \"crc\" DevicePath \"\"" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.859673 4725 reconciler_common.go:293] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/604a5ea1-fb17-44e8-9c63-30238fdea94d-config-volume\") on node \"crc\" DevicePath \"\"" Jan 20 12:15:03 crc kubenswrapper[4725]: I0120 12:15:03.859686 4725 reconciler_common.go:293] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/604a5ea1-fb17-44e8-9c63-30238fdea94d-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.370865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" event={"ID":"604a5ea1-fb17-44e8-9c63-30238fdea94d","Type":"ContainerDied","Data":"917d494b66019293ca66267c446d95a9639ed0de12bcb3eac631abc66f0d47a7"} Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.370926 4725 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="917d494b66019293ca66267c446d95a9639ed0de12bcb3eac631abc66f0d47a7" Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.370954 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29481855-bbzhl" Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.721625 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.731354 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29481810-txmbc"] Jan 20 12:15:04 crc kubenswrapper[4725]: I0120 12:15:04.943984 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0fdb152c-7b26-4ed6-8bb8-6a846224c67b" path="/var/lib/kubelet/pods/0fdb152c-7b26-4ed6-8bb8-6a846224c67b/volumes" Jan 20 12:15:18 crc kubenswrapper[4725]: I0120 12:15:18.050393 4725 scope.go:117] "RemoveContainer" containerID="19fb964594f75fcdba986836c9a966bf2aa65e41d99e7666a933d08acb12b332" Jan 20 12:15:56 crc kubenswrapper[4725]: I0120 12:15:56.729464 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:15:56 crc kubenswrapper[4725]: I0120 12:15:56.730323 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:16:26 crc kubenswrapper[4725]: I0120 12:16:26.727722 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" 
start-of-body= Jan 20 12:16:26 crc kubenswrapper[4725]: I0120 12:16:26.728567 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.728229 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.728971 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.729057 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.730006 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 12:16:56 crc kubenswrapper[4725]: I0120 12:16:56.730303 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6" gracePeriod=600 Jan 20 12:16:57 crc kubenswrapper[4725]: I0120 12:16:57.601821 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6" exitCode=0 Jan 20 12:16:57 crc kubenswrapper[4725]: I0120 12:16:57.601865 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6"} Jan 20 12:16:57 crc kubenswrapper[4725]: I0120 12:16:57.602385 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"} Jan 20 12:16:57 crc kubenswrapper[4725]: I0120 12:16:57.602450 4725 scope.go:117] "RemoveContainer" containerID="1f3012fc9c5a31745976a69dcdd68ea519e7247f6f7d11dfdfdb769831a8d09b" Jan 20 12:18:18 crc kubenswrapper[4725]: I0120 12:18:18.195578 4725 scope.go:117] "RemoveContainer" containerID="fe55a71fbad73183443a4a97a48dac2d17433bc1a0a0447ea38989d6cb15d0e4" Jan 20 12:18:18 crc kubenswrapper[4725]: I0120 12:18:18.224440 4725 
scope.go:117] "RemoveContainer" containerID="12fc0d4b7a6b440d05aae65bbaf75415b33cc1b772ffbbdf7c18502d8fa4db78" Jan 20 12:18:18 crc kubenswrapper[4725]: I0120 12:18:18.244733 4725 scope.go:117] "RemoveContainer" containerID="b11d2d8a1b0606ecc18cd1499a12a7672ace55137edbf153607ef35e8279f66f" Jan 20 12:19:26 crc kubenswrapper[4725]: I0120 12:19:26.727898 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:19:26 crc kubenswrapper[4725]: I0120 12:19:26.730467 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:19:56 crc kubenswrapper[4725]: I0120 12:19:56.728727 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:19:56 crc kubenswrapper[4725]: I0120 12:19:56.729898 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:20:18 crc kubenswrapper[4725]: I0120 12:20:18.367196 4725 scope.go:117] "RemoveContainer" containerID="e299d57fe3b730427479aa74a338c6276dc2a93442c8ef04ec170d411d8ae033" Jan 20 12:20:18 crc kubenswrapper[4725]: I0120 12:20:18.424019 4725 scope.go:117] "RemoveContainer" containerID="8b650c3f884771f6b8012af8c700a2a9c63c439a2436778c0694ae94e31d1bf3" Jan 20 12:20:18 crc kubenswrapper[4725]: I0120 12:20:18.475890 4725 scope.go:117] "RemoveContainer" containerID="4ea67bd5b9c937b7b33de544db8c146850cf32b44561494682f5dae6c6225a49" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.174701 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"] Jan 20 12:20:23 crc kubenswrapper[4725]: E0120 12:20:23.175046 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="604a5ea1-fb17-44e8-9c63-30238fdea94d" containerName="collect-profiles" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.175073 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="604a5ea1-fb17-44e8-9c63-30238fdea94d" containerName="collect-profiles" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.175269 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="604a5ea1-fb17-44e8-9c63-30238fdea94d" containerName="collect-profiles" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.175842 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.190406 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"] Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.371488 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") pod \"infrawatch-operators-lp67f\" (UID: \"7640ce90-ea6e-4f5c-af78-5502daee755f\") " pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.474656 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") pod \"infrawatch-operators-lp67f\" (UID: \"7640ce90-ea6e-4f5c-af78-5502daee755f\") " pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.496353 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") pod \"infrawatch-operators-lp67f\" (UID: \"7640ce90-ea6e-4f5c-af78-5502daee755f\") " pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.510191 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.953239 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"] Jan 20 12:20:23 crc kubenswrapper[4725]: I0120 12:20:23.975692 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 20 12:20:24 crc kubenswrapper[4725]: I0120 12:20:24.203540 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lp67f" event={"ID":"7640ce90-ea6e-4f5c-af78-5502daee755f","Type":"ContainerStarted","Data":"9e145c53b4b86d35224ef71ce470b3f0b816b43113b52c7bf1cdd1fa40715647"} Jan 20 12:20:25 crc kubenswrapper[4725]: I0120 12:20:25.214490 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lp67f" event={"ID":"7640ce90-ea6e-4f5c-af78-5502daee755f","Type":"ContainerStarted","Data":"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"} Jan 20 12:20:25 crc kubenswrapper[4725]: I0120 12:20:25.241658 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-lp67f" podStartSLOduration=2.098983361 podStartE2EDuration="2.241626331s" podCreationTimestamp="2026-01-20 12:20:23 +0000 UTC" firstStartedPulling="2026-01-20 12:20:23.975274346 +0000 UTC m=+4552.183596319" lastFinishedPulling="2026-01-20 12:20:24.117917316 +0000 UTC m=+4552.326239289" observedRunningTime="2026-01-20 12:20:25.2352512 +0000 UTC m=+4553.443573193" watchObservedRunningTime="2026-01-20 12:20:25.241626331 +0000 UTC m=+4553.449948294" Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.728126 4725 patch_prober.go:28] interesting pod/machine-config-daemon-z2gv8 container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get 
\"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.728616 4725 prober.go:107] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.728707 4725 kubelet.go:2542] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.729609 4725 kuberuntime_manager.go:1027] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"} pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 20 12:20:26 crc kubenswrapper[4725]: I0120 12:20:26.729679 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerName="machine-config-daemon" containerID="cri-o://08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" gracePeriod=600 Jan 20 12:20:26 crc kubenswrapper[4725]: E0120 12:20:26.854336 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:20:27 crc kubenswrapper[4725]: I0120 12:20:27.234845 4725 generic.go:334] "Generic (PLEG): container finished" podID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" exitCode=0 Jan 20 12:20:27 crc kubenswrapper[4725]: I0120 12:20:27.235245 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerDied","Data":"08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"} Jan 20 12:20:27 crc kubenswrapper[4725]: I0120 12:20:27.235404 4725 scope.go:117] "RemoveContainer" containerID="99897f9bf3c4270c0a3d94baa343e5cd6db1247874cead15edc920c14058a7e6" Jan 20 12:20:27 crc kubenswrapper[4725]: I0120 12:20:27.236222 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:20:27 crc kubenswrapper[4725]: E0120 12:20:27.236533 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:20:33 crc kubenswrapper[4725]: I0120 12:20:33.510454 4725 kubelet.go:2542] 
"SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:33 crc kubenswrapper[4725]: I0120 12:20:33.511300 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:33 crc kubenswrapper[4725]: I0120 12:20:33.540959 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:34 crc kubenswrapper[4725]: I0120 12:20:34.323243 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:34 crc kubenswrapper[4725]: I0120 12:20:34.364846 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"] Jan 20 12:20:36 crc kubenswrapper[4725]: I0120 12:20:36.310570 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-lp67f" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerName="registry-server" containerID="cri-o://4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f" gracePeriod=2 Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.262669 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.319735 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") pod \"7640ce90-ea6e-4f5c-af78-5502daee755f\" (UID: \"7640ce90-ea6e-4f5c-af78-5502daee755f\") " Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322470 4725 generic.go:334] "Generic (PLEG): container finished" podID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerID="4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f" exitCode=0 Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322511 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lp67f" event={"ID":"7640ce90-ea6e-4f5c-af78-5502daee755f","Type":"ContainerDied","Data":"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"} Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322543 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-lp67f" Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322577 4725 scope.go:117] "RemoveContainer" containerID="4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f" Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.322564 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-lp67f" event={"ID":"7640ce90-ea6e-4f5c-af78-5502daee755f","Type":"ContainerDied","Data":"9e145c53b4b86d35224ef71ce470b3f0b816b43113b52c7bf1cdd1fa40715647"} Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.328884 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz" (OuterVolumeSpecName: "kube-api-access-9zckz") pod "7640ce90-ea6e-4f5c-af78-5502daee755f" (UID: "7640ce90-ea6e-4f5c-af78-5502daee755f"). InnerVolumeSpecName "kube-api-access-9zckz". 
PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.371923 4725 scope.go:117] "RemoveContainer" containerID="4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f" Jan 20 12:20:37 crc kubenswrapper[4725]: E0120 12:20:37.372519 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f\": container with ID starting with 4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f not found: ID does not exist" containerID="4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f" Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.372557 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f"} err="failed to get container status \"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f\": rpc error: code = NotFound desc = could not find container \"4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f\": container with ID starting with 4713ab8c83320dfd043da22c541ce332cbede107df0f43c2c151d7229739291f not found: ID does not exist" Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.421101 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-9zckz\" (UniqueName: \"kubernetes.io/projected/7640ce90-ea6e-4f5c-af78-5502daee755f-kube-api-access-9zckz\") on node \"crc\" DevicePath \"\"" Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.656146 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"] Jan 20 12:20:37 crc kubenswrapper[4725]: I0120 12:20:37.663102 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-lp67f"] Jan 20 12:20:38 crc kubenswrapper[4725]: I0120 12:20:38.943398 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" path="/var/lib/kubelet/pods/7640ce90-ea6e-4f5c-af78-5502daee755f/volumes" Jan 20 12:20:41 crc kubenswrapper[4725]: I0120 12:20:41.933321 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:20:41 crc kubenswrapper[4725]: E0120 12:20:41.933801 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:20:54 crc kubenswrapper[4725]: I0120 12:20:54.933022 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:20:54 crc kubenswrapper[4725]: E0120 12:20:54.934040 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:21:01 crc kubenswrapper[4725]: 
I0120 12:21:01.836263 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-658h8"] Jan 20 12:21:01 crc kubenswrapper[4725]: E0120 12:21:01.837428 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerName="registry-server" Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.837448 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerName="registry-server" Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.837698 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="7640ce90-ea6e-4f5c-af78-5502daee755f" containerName="registry-server" Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.839033 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.867662 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"] Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.997756 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.997986 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:01 crc kubenswrapper[4725]: I0120 12:21:01.998215 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.099673 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.099787 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.099842 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.100591 
4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.100591 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.137459 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") pod \"redhat-operators-658h8\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.173695 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.622931 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"] Jan 20 12:21:02 crc kubenswrapper[4725]: I0120 12:21:02.671776 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerStarted","Data":"bcf2688f9c9fd6b567706ba480c4ac8ffd3d7103e7a910e08e35b300f702ec49"} Jan 20 12:21:03 crc kubenswrapper[4725]: I0120 12:21:03.683381 4725 generic.go:334] "Generic (PLEG): container finished" podID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerID="51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea" exitCode=0 Jan 20 12:21:03 crc kubenswrapper[4725]: I0120 12:21:03.683470 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerDied","Data":"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea"} Jan 20 12:21:04 crc kubenswrapper[4725]: I0120 12:21:04.692348 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerStarted","Data":"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95"} Jan 20 12:21:07 crc kubenswrapper[4725]: I0120 12:21:07.720386 4725 generic.go:334] "Generic (PLEG): container finished" podID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerID="009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95" exitCode=0 Jan 20 12:21:07 crc kubenswrapper[4725]: I0120 12:21:07.720629 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerDied","Data":"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95"} Jan 20 12:21:07 crc kubenswrapper[4725]: I0120 12:21:07.933394 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:21:07 crc kubenswrapper[4725]: E0120 12:21:07.933716 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:21:09 crc kubenswrapper[4725]: I0120 12:21:09.746119 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerStarted","Data":"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"} Jan 20 12:21:09 crc kubenswrapper[4725]: I0120 12:21:09.791364 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-658h8" podStartSLOduration=3.474937984 podStartE2EDuration="8.791280282s" podCreationTimestamp="2026-01-20 12:21:01 +0000 UTC" firstStartedPulling="2026-01-20 12:21:03.685424384 +0000 UTC m=+4591.893746367" lastFinishedPulling="2026-01-20 12:21:09.001766692 +0000 UTC m=+4597.210088665" observedRunningTime="2026-01-20 12:21:09.782401072 +0000 UTC m=+4597.990723045" watchObservedRunningTime="2026-01-20 12:21:09.791280282 +0000 UTC m=+4597.999602255" Jan 20 12:21:12 crc kubenswrapper[4725]: I0120 12:21:12.176024 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:12 crc kubenswrapper[4725]: I0120 12:21:12.176631 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:13 crc kubenswrapper[4725]: I0120 12:21:13.248609 4725 prober.go:107] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-658h8" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" probeResult="failure" output=< Jan 20 12:21:13 crc kubenswrapper[4725]: timeout: failed to connect service ":50051" within 1s Jan 20 12:21:13 crc kubenswrapper[4725]: > Jan 20 12:21:21 crc kubenswrapper[4725]: I0120 12:21:21.933260 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:21:21 crc kubenswrapper[4725]: E0120 12:21:21.934748 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:21:22 crc kubenswrapper[4725]: I0120 12:21:22.239544 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:22 crc kubenswrapper[4725]: I0120 12:21:22.286633 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:22 crc kubenswrapper[4725]: I0120 12:21:22.482187 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"] Jan 20 12:21:23 crc kubenswrapper[4725]: I0120 12:21:23.984914 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-658h8" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" 
containerName="registry-server" containerID="cri-o://19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a" gracePeriod=2 Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.510646 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.640005 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") pod \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.640200 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") pod \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.640269 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") pod \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\" (UID: \"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6\") " Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.641629 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities" (OuterVolumeSpecName: "utilities") pod "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" (UID: "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.648882 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp" (OuterVolumeSpecName: "kube-api-access-w6bkp") pod "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" (UID: "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6"). InnerVolumeSpecName "kube-api-access-w6bkp". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.742145 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.742188 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-w6bkp\" (UniqueName: \"kubernetes.io/projected/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-kube-api-access-w6bkp\") on node \"crc\" DevicePath \"\"" Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.778584 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" (UID: "dc5e28f2-6c79-46db-9cb4-33a9ff1827c6"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.843467 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997506 4725 generic.go:334] "Generic (PLEG): container finished" podID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerID="19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a" exitCode=0 Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997582 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerDied","Data":"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"} Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997628 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-658h8" event={"ID":"dc5e28f2-6c79-46db-9cb4-33a9ff1827c6","Type":"ContainerDied","Data":"bcf2688f9c9fd6b567706ba480c4ac8ffd3d7103e7a910e08e35b300f702ec49"} Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997649 4725 scope.go:117] "RemoveContainer" containerID="19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a" Jan 20 12:21:24 crc kubenswrapper[4725]: I0120 12:21:24.997664 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-658h8" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.037749 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"] Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.040676 4725 scope.go:117] "RemoveContainer" containerID="009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.042593 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-658h8"] Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.072237 4725 scope.go:117] "RemoveContainer" containerID="51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.108922 4725 scope.go:117] "RemoveContainer" containerID="19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a" Jan 20 12:21:25 crc kubenswrapper[4725]: E0120 12:21:25.109648 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a\": container with ID starting with 19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a not found: ID does not exist" containerID="19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.109687 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a"} err="failed to get container status \"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a\": rpc error: code = NotFound desc = could not find container \"19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a\": container with ID starting with 19213318ef2e19abe5bed2a0f853314af7871cbe2cfdd9e41b3e5a921bb77d2a not found: ID does not exist" Jan 20 12:21:25 crc 
kubenswrapper[4725]: I0120 12:21:25.109722 4725 scope.go:117] "RemoveContainer" containerID="009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95" Jan 20 12:21:25 crc kubenswrapper[4725]: E0120 12:21:25.110411 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95\": container with ID starting with 009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95 not found: ID does not exist" containerID="009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.110482 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95"} err="failed to get container status \"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95\": rpc error: code = NotFound desc = could not find container \"009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95\": container with ID starting with 009c81d40f1ac9267990ff18f4fb7472f1e59786baf9033867f08705f20aab95 not found: ID does not exist" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.110532 4725 scope.go:117] "RemoveContainer" containerID="51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea" Jan 20 12:21:25 crc kubenswrapper[4725]: E0120 12:21:25.111036 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea\": container with ID starting with 51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea not found: ID does not exist" containerID="51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea" Jan 20 12:21:25 crc kubenswrapper[4725]: I0120 12:21:25.111133 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea"} err="failed to get container status \"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea\": rpc error: code = NotFound desc = could not find container \"51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea\": container with ID starting with 51a320466ed1a267503b777f64e72ff7724df04eae7d239547d6d3437a1333ea not found: ID does not exist" Jan 20 12:21:26 crc kubenswrapper[4725]: I0120 12:21:26.966262 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" path="/var/lib/kubelet/pods/dc5e28f2-6c79-46db-9cb4-33a9ff1827c6/volumes" Jan 20 12:21:34 crc kubenswrapper[4725]: I0120 12:21:34.934038 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:21:34 crc kubenswrapper[4725]: E0120 12:21:34.936899 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:21:48 crc kubenswrapper[4725]: I0120 12:21:48.936937 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" 
Jan 20 12:21:48 crc kubenswrapper[4725]: E0120 12:21:48.937899 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:21:59 crc kubenswrapper[4725]: I0120 12:21:59.933294 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:21:59 crc kubenswrapper[4725]: E0120 12:21:59.934284 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:22:11 crc kubenswrapper[4725]: I0120 12:22:11.933132 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:22:11 crc kubenswrapper[4725]: E0120 12:22:11.934099 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.373034 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:19 crc kubenswrapper[4725]: E0120 12:22:19.374468 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="extract-content" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.374489 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="extract-content" Jan 20 12:22:19 crc kubenswrapper[4725]: E0120 12:22:19.374533 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="extract-utilities" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.374543 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="extract-utilities" Jan 20 12:22:19 crc kubenswrapper[4725]: E0120 12:22:19.374584 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.374595 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.374903 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="dc5e28f2-6c79-46db-9cb4-33a9ff1827c6" containerName="registry-server" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.376753 4725 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.382238 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.545902 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.545987 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.546052 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.648716 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.648849 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.648910 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.649674 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.649998 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.688390 4725 operation_generator.go:637] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") pod \"certified-operators-w8p54\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:19 crc kubenswrapper[4725]: I0120 12:22:19.721579 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:20 crc kubenswrapper[4725]: I0120 12:22:20.127547 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:20 crc kubenswrapper[4725]: I0120 12:22:20.529703 4725 generic.go:334] "Generic (PLEG): container finished" podID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerID="3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255" exitCode=0 Jan 20 12:22:20 crc kubenswrapper[4725]: I0120 12:22:20.529765 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerDied","Data":"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255"} Jan 20 12:22:20 crc kubenswrapper[4725]: I0120 12:22:20.529798 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerStarted","Data":"f1b75603941b80db17c2ee1dc8d105b359287d074919b28b90768bb82fd3ba6f"} Jan 20 12:22:22 crc kubenswrapper[4725]: I0120 12:22:22.550921 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerStarted","Data":"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846"} Jan 20 12:22:23 crc kubenswrapper[4725]: I0120 12:22:23.595264 4725 generic.go:334] "Generic (PLEG): container finished" podID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerID="13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846" exitCode=0 Jan 20 12:22:23 crc kubenswrapper[4725]: I0120 12:22:23.595666 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerDied","Data":"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846"} Jan 20 12:22:24 crc kubenswrapper[4725]: I0120 12:22:24.611989 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerStarted","Data":"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78"} Jan 20 12:22:24 crc kubenswrapper[4725]: I0120 12:22:24.727973 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-w8p54" podStartSLOduration=2.112529183 podStartE2EDuration="5.727939014s" podCreationTimestamp="2026-01-20 12:22:19 +0000 UTC" firstStartedPulling="2026-01-20 12:22:20.531948155 +0000 UTC m=+4668.740270128" lastFinishedPulling="2026-01-20 12:22:24.147357966 +0000 UTC m=+4672.355679959" observedRunningTime="2026-01-20 12:22:24.720791308 +0000 UTC m=+4672.929113291" watchObservedRunningTime="2026-01-20 12:22:24.727939014 +0000 UTC m=+4672.936260997" Jan 20 12:22:26 crc kubenswrapper[4725]: I0120 12:22:26.933745 4725 scope.go:117] "RemoveContainer" 
containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d" Jan 20 12:22:26 crc kubenswrapper[4725]: E0120 12:22:26.934542 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668" Jan 20 12:22:29 crc kubenswrapper[4725]: I0120 12:22:29.722554 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:29 crc kubenswrapper[4725]: I0120 12:22:29.723129 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:29 crc kubenswrapper[4725]: I0120 12:22:29.782170 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:30 crc kubenswrapper[4725]: I0120 12:22:30.904176 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:31 crc kubenswrapper[4725]: I0120 12:22:31.001794 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:32 crc kubenswrapper[4725]: I0120 12:22:32.696283 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-w8p54" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="registry-server" containerID="cri-o://4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" gracePeriod=2 Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.152126 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.205651 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") pod \"fdab8aea-b316-46bd-8ef3-419256bf52ae\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.206490 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") pod \"fdab8aea-b316-46bd-8ef3-419256bf52ae\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.206544 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") pod \"fdab8aea-b316-46bd-8ef3-419256bf52ae\" (UID: \"fdab8aea-b316-46bd-8ef3-419256bf52ae\") " Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.208014 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities" (OuterVolumeSpecName: "utilities") pod "fdab8aea-b316-46bd-8ef3-419256bf52ae" (UID: "fdab8aea-b316-46bd-8ef3-419256bf52ae"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.221994 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8" (OuterVolumeSpecName: "kube-api-access-qkzt8") pod "fdab8aea-b316-46bd-8ef3-419256bf52ae" (UID: "fdab8aea-b316-46bd-8ef3-419256bf52ae"). InnerVolumeSpecName "kube-api-access-qkzt8". PluginName "kubernetes.io/projected", VolumeGidValue "" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.257371 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "fdab8aea-b316-46bd-8ef3-419256bf52ae" (UID: "fdab8aea-b316-46bd-8ef3-419256bf52ae"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue "" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.308418 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.308471 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/fdab8aea-b316-46bd-8ef3-419256bf52ae-utilities\") on node \"crc\" DevicePath \"\"" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.308485 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-qkzt8\" (UniqueName: \"kubernetes.io/projected/fdab8aea-b316-46bd-8ef3-419256bf52ae-kube-api-access-qkzt8\") on node \"crc\" DevicePath \"\"" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821591 4725 generic.go:334] "Generic (PLEG): container finished" podID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerID="4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" exitCode=0 Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821709 4725 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-w8p54" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821708 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerDied","Data":"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78"} Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821934 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-w8p54" event={"ID":"fdab8aea-b316-46bd-8ef3-419256bf52ae","Type":"ContainerDied","Data":"f1b75603941b80db17c2ee1dc8d105b359287d074919b28b90768bb82fd3ba6f"} Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.821983 4725 scope.go:117] "RemoveContainer" containerID="4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.848844 4725 scope.go:117] "RemoveContainer" containerID="13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.893406 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.900054 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-w8p54"] Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.909157 4725 scope.go:117] "RemoveContainer" containerID="3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.934218 4725 scope.go:117] "RemoveContainer" containerID="4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" Jan 20 12:22:34 crc kubenswrapper[4725]: E0120 12:22:34.934875 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78\": container with ID starting with 4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78 not found: ID does not exist" containerID="4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.934938 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78"} err="failed to get container status \"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78\": rpc error: code = NotFound desc = could not find container \"4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78\": container with ID starting with 4e62a70e8d0d88a5a800e1827d91617d0bc420b551e4ce1637feacf8f7477b78 not found: ID does not exist" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.934977 4725 scope.go:117] "RemoveContainer" containerID="13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846" Jan 20 12:22:34 crc kubenswrapper[4725]: E0120 12:22:34.935438 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846\": container with ID starting with 13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846 not found: ID does not exist" containerID="13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846" Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.935465 4725 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846"} err="failed to get container status \"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846\": rpc error: code = NotFound desc = could not find container \"13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846\": container with ID starting with 13d8c192c4d2973411dbab6c89524b9429fbf7ae17fc1d9d0964e318ccae3846 not found: ID does not exist"
Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.935500 4725 scope.go:117] "RemoveContainer" containerID="3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255"
Jan 20 12:22:34 crc kubenswrapper[4725]: E0120 12:22:34.935865 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255\": container with ID starting with 3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255 not found: ID does not exist" containerID="3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255"
Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.935919 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255"} err="failed to get container status \"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255\": rpc error: code = NotFound desc = could not find container \"3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255\": container with ID starting with 3f8732f1e0d155a951b555422a3d9b59786136466baeeb54d35ed8f3fbf45255 not found: ID does not exist"
Jan 20 12:22:34 crc kubenswrapper[4725]: I0120 12:22:34.944842 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" path="/var/lib/kubelet/pods/fdab8aea-b316-46bd-8ef3-419256bf52ae/volumes"
Jan 20 12:22:40 crc kubenswrapper[4725]: I0120 12:22:40.935584 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:22:40 crc kubenswrapper[4725]: E0120 12:22:40.936601 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:22:54 crc kubenswrapper[4725]: I0120 12:22:54.932548 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:22:54 crc kubenswrapper[4725]: E0120 12:22:54.933529 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:23:09 crc kubenswrapper[4725]: I0120 12:23:09.934547 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:23:09 crc kubenswrapper[4725]: E0120 12:23:09.936053 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:23:20 crc kubenswrapper[4725]: I0120 12:23:20.933017 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:23:20 crc kubenswrapper[4725]: E0120 12:23:20.934187 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:23:31 crc kubenswrapper[4725]: I0120 12:23:31.932130 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:23:31 crc kubenswrapper[4725]: E0120 12:23:31.933271 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:23:43 crc kubenswrapper[4725]: I0120 12:23:43.933488 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:23:43 crc kubenswrapper[4725]: E0120 12:23:43.934703 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.917625 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2g954"]
Jan 20 12:23:51 crc kubenswrapper[4725]: E0120 12:23:51.918794 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="extract-content"
Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.918848 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="extract-content"
Jan 20 12:23:51 crc kubenswrapper[4725]: E0120 12:23:51.918887 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="extract-utilities"
Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.918899 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="extract-utilities"
Jan 20 12:23:51 crc kubenswrapper[4725]: E0120 12:23:51.918907 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="registry-server"
Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.918916 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="registry-server"
Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.919145 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="fdab8aea-b316-46bd-8ef3-419256bf52ae" containerName="registry-server"
Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.920635 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:51 crc kubenswrapper[4725]: I0120 12:23:51.925446 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2g954"]
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.114579 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.114696 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.114787 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.216932 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.217184 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.217242 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.217559 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.217901 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.252478 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") pod \"community-operators-2g954\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") " pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.544396 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:23:52 crc kubenswrapper[4725]: I0120 12:23:52.848526 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2g954"]
Jan 20 12:23:53 crc kubenswrapper[4725]: I0120 12:23:53.146880 4725 generic.go:334] "Generic (PLEG): container finished" podID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerID="a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f" exitCode=0
Jan 20 12:23:53 crc kubenswrapper[4725]: I0120 12:23:53.146947 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerDied","Data":"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f"}
Jan 20 12:23:53 crc kubenswrapper[4725]: I0120 12:23:53.147020 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerStarted","Data":"62b46a944066ca300e9e0e9f1441b3c5d70a48ee5cec6affb2a56a533f232b74"}
Jan 20 12:23:54 crc kubenswrapper[4725]: I0120 12:23:54.933207 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:23:54 crc kubenswrapper[4725]: E0120 12:23:54.933807 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:23:55 crc kubenswrapper[4725]: I0120 12:23:55.175380 4725 generic.go:334] "Generic (PLEG): container finished" podID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerID="a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b" exitCode=0
Jan 20 12:23:55 crc kubenswrapper[4725]: I0120 12:23:55.175446 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerDied","Data":"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b"}
Jan 20 12:23:56 crc kubenswrapper[4725]: I0120 12:23:56.198401 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerStarted","Data":"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"}
Jan 20 12:24:02 crc kubenswrapper[4725]: I0120 12:24:02.544821 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:24:02 crc kubenswrapper[4725]: I0120 12:24:02.545963 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:24:02 crc kubenswrapper[4725]: I0120 12:24:02.606876 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:24:02 crc kubenswrapper[4725]: I0120 12:24:02.641456 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2g954" podStartSLOduration=8.899162263000001 podStartE2EDuration="11.641420495s" podCreationTimestamp="2026-01-20 12:23:51 +0000 UTC" firstStartedPulling="2026-01-20 12:23:53.149487472 +0000 UTC m=+4761.357809445" lastFinishedPulling="2026-01-20 12:23:55.891745704 +0000 UTC m=+4764.100067677" observedRunningTime="2026-01-20 12:23:56.228312873 +0000 UTC m=+4764.436634846" watchObservedRunningTime="2026-01-20 12:24:02.641420495 +0000 UTC m=+4770.849742468"
Jan 20 12:24:03 crc kubenswrapper[4725]: I0120 12:24:03.390665 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:24:03 crc kubenswrapper[4725]: I0120 12:24:03.447351 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2g954"]
Jan 20 12:24:05 crc kubenswrapper[4725]: I0120 12:24:05.305577 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2g954" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="registry-server" containerID="cri-o://3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef" gracePeriod=2
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.386888 4725 generic.go:334] "Generic (PLEG): container finished" podID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerID="3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef" exitCode=0
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.387367 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.387483 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerDied","Data":"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"}
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.387543 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2g954" event={"ID":"077a41f9-bfcb-47c4-b8de-f003ae7384ca","Type":"ContainerDied","Data":"62b46a944066ca300e9e0e9f1441b3c5d70a48ee5cec6affb2a56a533f232b74"}
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.387575 4725 scope.go:117] "RemoveContainer" containerID="3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.394404 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") pod \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") "
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.394462 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") pod \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") "
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.394571 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") pod \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\" (UID: \"077a41f9-bfcb-47c4-b8de-f003ae7384ca\") "
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.395927 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities" (OuterVolumeSpecName: "utilities") pod "077a41f9-bfcb-47c4-b8de-f003ae7384ca" (UID: "077a41f9-bfcb-47c4-b8de-f003ae7384ca"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.404583 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn" (OuterVolumeSpecName: "kube-api-access-xrkdn") pod "077a41f9-bfcb-47c4-b8de-f003ae7384ca" (UID: "077a41f9-bfcb-47c4-b8de-f003ae7384ca"). InnerVolumeSpecName "kube-api-access-xrkdn". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.421694 4725 scope.go:117] "RemoveContainer" containerID="a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.450936 4725 scope.go:117] "RemoveContainer" containerID="a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.472623 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "077a41f9-bfcb-47c4-b8de-f003ae7384ca" (UID: "077a41f9-bfcb-47c4-b8de-f003ae7384ca"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGidValue ""
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.480572 4725 scope.go:117] "RemoveContainer" containerID="3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"
Jan 20 12:24:06 crc kubenswrapper[4725]: E0120 12:24:06.481359 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef\": container with ID starting with 3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef not found: ID does not exist" containerID="3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.481418 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef"} err="failed to get container status \"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef\": rpc error: code = NotFound desc = could not find container \"3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef\": container with ID starting with 3e963a192cb36864a3155cb1f13e9e728e408861ae464dffd251028226ffc8ef not found: ID does not exist"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.481461 4725 scope.go:117] "RemoveContainer" containerID="a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b"
Jan 20 12:24:06 crc kubenswrapper[4725]: E0120 12:24:06.482166 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b\": container with ID starting with a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b not found: ID does not exist" containerID="a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.482233 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b"} err="failed to get container status \"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b\": rpc error: code = NotFound desc = could not find container \"a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b\": container with ID starting with a414b4b59ea9298ce462082f2d199ba70cb11b944a0bae058bbc6369c7d3ec2b not found: ID does not exist"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.482267 4725 scope.go:117] "RemoveContainer" containerID="a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f"
Jan 20 12:24:06 crc kubenswrapper[4725]: E0120 12:24:06.482731 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f\": container with ID starting with a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f not found: ID does not exist" containerID="a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.482761 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f"} err="failed to get container status \"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f\": rpc error: code = NotFound desc = could not find container \"a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f\": container with ID starting with a4749ab88b1c01774924899735541301334d280e33714a0cbd4ff7290d8a667f not found: ID does not exist"
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.496370 4725 reconciler_common.go:293] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-catalog-content\") on node \"crc\" DevicePath \"\""
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.496402 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-xrkdn\" (UniqueName: \"kubernetes.io/projected/077a41f9-bfcb-47c4-b8de-f003ae7384ca-kube-api-access-xrkdn\") on node \"crc\" DevicePath \"\""
Jan 20 12:24:06 crc kubenswrapper[4725]: I0120 12:24:06.496418 4725 reconciler_common.go:293] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/077a41f9-bfcb-47c4-b8de-f003ae7384ca-utilities\") on node \"crc\" DevicePath \"\""
Jan 20 12:24:07 crc kubenswrapper[4725]: I0120 12:24:07.401535 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2g954"
Jan 20 12:24:07 crc kubenswrapper[4725]: I0120 12:24:07.438052 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2g954"]
Jan 20 12:24:07 crc kubenswrapper[4725]: I0120 12:24:07.445398 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2g954"]
Jan 20 12:24:07 crc kubenswrapper[4725]: I0120 12:24:07.932825 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:24:07 crc kubenswrapper[4725]: E0120 12:24:07.934805 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:24:08 crc kubenswrapper[4725]: I0120 12:24:08.948365 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" path="/var/lib/kubelet/pods/077a41f9-bfcb-47c4-b8de-f003ae7384ca/volumes"
Jan 20 12:24:18 crc kubenswrapper[4725]: I0120 12:24:18.933973 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:24:18 crc kubenswrapper[4725]: E0120 12:24:18.934981 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:24:31 crc kubenswrapper[4725]: I0120 12:24:31.933863 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:24:31 crc kubenswrapper[4725]: E0120 12:24:31.935255 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:24:42 crc kubenswrapper[4725]: I0120 12:24:42.940526 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:24:42 crc kubenswrapper[4725]: E0120 12:24:42.941808 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:24:56 crc kubenswrapper[4725]: I0120 12:24:56.935760 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:24:56 crc kubenswrapper[4725]: E0120 12:24:56.937160 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:25:10 crc kubenswrapper[4725]: I0120 12:25:10.932980 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:25:10 crc kubenswrapper[4725]: E0120 12:25:10.934349 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:25:23 crc kubenswrapper[4725]: I0120 12:25:23.933986 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:25:23 crc kubenswrapper[4725]: E0120 12:25:23.934781 4725 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-z2gv8_openshift-machine-config-operator(6a4c10a0-687d-4b24-b1a9-5aba619c0668)\"" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" podUID="6a4c10a0-687d-4b24-b1a9-5aba619c0668"
Jan 20 12:25:34 crc kubenswrapper[4725]: I0120 12:25:34.932663 4725 scope.go:117] "RemoveContainer" containerID="08ad00dd57f57f350563a5ca52aed9c5204ca108d3767510058076548a40c81d"
Jan 20 12:25:35 crc kubenswrapper[4725]: I0120 12:25:35.576506 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-z2gv8" event={"ID":"6a4c10a0-687d-4b24-b1a9-5aba619c0668","Type":"ContainerStarted","Data":"adcc73ceecbc4583b032a69bd929a281091ea5ff89f855bfb4e2fea34e05779a"}
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.239297 4725 kubelet.go:2421] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"]
Jan 20 12:25:51 crc kubenswrapper[4725]: E0120 12:25:51.240606 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="registry-server"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.240635 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="registry-server"
Jan 20 12:25:51 crc kubenswrapper[4725]: E0120 12:25:51.240662 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="extract-utilities"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.240671 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="extract-utilities"
Jan 20 12:25:51 crc kubenswrapper[4725]: E0120 12:25:51.240695 4725 cpu_manager.go:410] "RemoveStaleState: removing container" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="extract-content"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.240704 4725 state_mem.go:107] "Deleted CPUSet assignment" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="extract-content"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.240904 4725 memory_manager.go:354] "RemoveStaleState removing state" podUID="077a41f9-bfcb-47c4-b8de-f003ae7384ca" containerName="registry-server"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.241697 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.260724 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"]
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.348105 4725 reconciler_common.go:245] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") pod \"infrawatch-operators-shf8t\" (UID: \"171b1e77-c3d2-43eb-9915-3df404db0c2c\") " pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.449817 4725 reconciler_common.go:218] "operationExecutor.MountVolume started for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") pod \"infrawatch-operators-shf8t\" (UID: \"171b1e77-c3d2-43eb-9915-3df404db0c2c\") " pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.512059 4725 operation_generator.go:637] "MountVolume.SetUp succeeded for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") pod \"infrawatch-operators-shf8t\" (UID: \"171b1e77-c3d2-43eb-9915-3df404db0c2c\") " pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.566581 4725 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.987006 4725 kubelet.go:2428] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"]
Jan 20 12:25:51 crc kubenswrapper[4725]: W0120 12:25:51.995129 4725 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod171b1e77_c3d2_43eb_9915_3df404db0c2c.slice/crio-e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9 WatchSource:0}: Error finding container e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9: Status 404 returned error can't find the container with id e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9
Jan 20 12:25:51 crc kubenswrapper[4725]: I0120 12:25:51.998926 4725 provider.go:102] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider
Jan 20 12:25:52 crc kubenswrapper[4725]: I0120 12:25:52.771151 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-shf8t" event={"ID":"171b1e77-c3d2-43eb-9915-3df404db0c2c","Type":"ContainerStarted","Data":"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"}
Jan 20 12:25:52 crc kubenswrapper[4725]: I0120 12:25:52.771233 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-shf8t" event={"ID":"171b1e77-c3d2-43eb-9915-3df404db0c2c","Type":"ContainerStarted","Data":"e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9"}
Jan 20 12:25:52 crc kubenswrapper[4725]: I0120 12:25:52.816234 4725 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-shf8t" podStartSLOduration=1.692880178 podStartE2EDuration="1.816206659s" podCreationTimestamp="2026-01-20 12:25:51 +0000 UTC" firstStartedPulling="2026-01-20 12:25:51.99851853 +0000 UTC m=+4880.206840503" lastFinishedPulling="2026-01-20 12:25:52.121845011 +0000 UTC m=+4880.330166984" observedRunningTime="2026-01-20 12:25:52.7864352 +0000 UTC m=+4880.994757233" watchObservedRunningTime="2026-01-20 12:25:52.816206659 +0000 UTC m=+4881.024528642"
Jan 20 12:26:01 crc kubenswrapper[4725]: I0120 12:26:01.582355 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="" pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:26:01 crc kubenswrapper[4725]: I0120 12:26:01.583262 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:26:01 crc kubenswrapper[4725]: I0120 12:26:01.642336 4725 kubelet.go:2542] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:26:01 crc kubenswrapper[4725]: I0120 12:26:01.891469 4725 kubelet.go:2542] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:26:02 crc kubenswrapper[4725]: I0120 12:26:02.007443 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"]
Jan 20 12:26:03 crc kubenswrapper[4725]: I0120 12:26:03.880828 4725 kuberuntime_container.go:808] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-shf8t" podUID="171b1e77-c3d2-43eb-9915-3df404db0c2c" containerName="registry-server" containerID="cri-o://f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3" gracePeriod=2
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.289336 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.451216 4725 reconciler_common.go:159] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") pod \"171b1e77-c3d2-43eb-9915-3df404db0c2c\" (UID: \"171b1e77-c3d2-43eb-9915-3df404db0c2c\") "
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.460512 4725 operation_generator.go:803] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k" (OuterVolumeSpecName: "kube-api-access-vtr9k") pod "171b1e77-c3d2-43eb-9915-3df404db0c2c" (UID: "171b1e77-c3d2-43eb-9915-3df404db0c2c"). InnerVolumeSpecName "kube-api-access-vtr9k". PluginName "kubernetes.io/projected", VolumeGidValue ""
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.553236 4725 reconciler_common.go:293] "Volume detached for volume \"kube-api-access-vtr9k\" (UniqueName: \"kubernetes.io/projected/171b1e77-c3d2-43eb-9915-3df404db0c2c-kube-api-access-vtr9k\") on node \"crc\" DevicePath \"\""
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.896937 4725 generic.go:334] "Generic (PLEG): container finished" podID="171b1e77-c3d2-43eb-9915-3df404db0c2c" containerID="f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3" exitCode=0
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.897132 4725 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-shf8t"
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.897141 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-shf8t" event={"ID":"171b1e77-c3d2-43eb-9915-3df404db0c2c","Type":"ContainerDied","Data":"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"}
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.898415 4725 kubelet.go:2453] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-shf8t" event={"ID":"171b1e77-c3d2-43eb-9915-3df404db0c2c","Type":"ContainerDied","Data":"e62f7ec9321b7524516519a0b8d0c5e20e37217675a19308173d34e8810cb6e9"}
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.898460 4725 scope.go:117] "RemoveContainer" containerID="f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.926694 4725 scope.go:117] "RemoveContainer" containerID="f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"
Jan 20 12:26:04 crc kubenswrapper[4725]: E0120 12:26:04.927385 4725 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3\": container with ID starting with f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3 not found: ID does not exist" containerID="f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.927444 4725 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3"} err="failed to get container status \"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3\": rpc error: code = NotFound desc = could not find container \"f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3\": container with ID starting with f04b395eadadf16eefbc40add407097d20367049759dfb4f665866647c2effd3 not found: ID does not exist"
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.956805 4725 kubelet.go:2437] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"]
Jan 20 12:26:04 crc kubenswrapper[4725]: I0120 12:26:04.965068 4725 kubelet.go:2431] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-shf8t"]
Jan 20 12:26:06 crc kubenswrapper[4725]: I0120 12:26:06.947491 4725 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="171b1e77-c3d2-43eb-9915-3df404db0c2c" path="/var/lib/kubelet/pods/171b1e77-c3d2-43eb-9915-3df404db0c2c/volumes"